Test Report: KVM_Linux_crio 19377

                    
81fa2899e75fb9e546311166288b8d27068854ba:2024-08-05:35656

Failed tests (32/320)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 155.66
45 TestAddons/parallel/MetricsServer 330.33
54 TestAddons/StoppedEnableDisable 154.3
124 TestFunctional/parallel/ImageCommands/ImageListShort 2.27
132 TestFunctional/parallel/MountCmd/any-port 242.83
173 TestMultiControlPlane/serial/StopSecondaryNode 141.9
175 TestMultiControlPlane/serial/RestartSecondaryNode 51.83
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 372.15
180 TestMultiControlPlane/serial/StopCluster 141.85
240 TestMultiNode/serial/RestartKeepsNodes 323.1
242 TestMultiNode/serial/StopMultiNode 141.41
249 TestPreload 279.39
257 TestKubernetesUpgrade 445.26
294 TestPause/serial/SecondStartNoReconfiguration 53.5
323 TestStartStop/group/old-k8s-version/serial/FirstStart 300.67
348 TestStartStop/group/embed-certs/serial/Stop 138.97
351 TestStartStop/group/no-preload/serial/Stop 139.08
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.04
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 105.66
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 721.67
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.35
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.24
368 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.21
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.57
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 376.05
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 396.34
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 445.71
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 128.03
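
Each entry above is a Go subtest from minikube's test/integration suite (the addons_test.go and helpers_test.go references in the logs below). As a minimal sketch only, assuming a local checkout with out/minikube-linux-amd64 already built and the same kvm2 driver / crio runtime setup as this job (the CI harness passes additional flags that are omitted here), a single failure can be re-run with the standard go test -run filter, and the check that times out in the Ingress failure below can be repeated with the command the test itself issues:

	# Re-run only the failing subtest; the timeout value is an illustrative assumption.
	go test -v -timeout 60m -run "TestAddons/parallel/Ingress" ./test/integration

	# The command that fails below; "Process exited with status 28" is consistent with
	# curl's exit code 28 (operation timed out).
	out/minikube-linux-amd64 -p addons-624151 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
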
TestAddons/parallel/Ingress (155.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-624151 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-624151 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-624151 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8c086e51-e9aa-47d4-b5da-7196cbb25a28] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8c086e51-e9aa-47d4-b5da-7196cbb25a28] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.003323638s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-624151 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.372725724s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-624151 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.142
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 addons disable ingress-dns --alsologtostderr -v=1: (1.581363338s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 addons disable ingress --alsologtostderr -v=1: (7.777801457s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-624151 -n addons-624151
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 logs -n 25: (1.207491106s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-704604                                                                     | download-only-704604 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| delete  | -p download-only-413572                                                                     | download-only-413572 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-355673 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | binary-mirror-355673                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37911                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-355673                                                                     | binary-mirror-355673 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-624151 --wait=true                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:30 UTC | 05 Aug 24 11:30 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:30 UTC | 05 Aug 24 11:30 UTC |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| ip      | addons-624151 ip                                                                            | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | -p addons-624151                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | -p addons-624151                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-624151 ssh cat                                                                       | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | /opt/local-path-provisioner/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-624151 ssh curl -s                                                                   | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-624151 addons                                                                        | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:32 UTC | 05 Aug 24 11:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:32 UTC | 05 Aug 24 11:32 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons                                                                        | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:32 UTC | 05 Aug 24 11:32 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-624151 ip                                                                            | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:33 UTC | 05 Aug 24 11:33 UTC |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:33 UTC | 05 Aug 24 11:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:34 UTC | 05 Aug 24 11:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:27:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:27:52.759052  392242 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:27:52.759333  392242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:52.759344  392242 out.go:304] Setting ErrFile to fd 2...
	I0805 11:27:52.759351  392242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:52.759531  392242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:27:52.760217  392242 out.go:298] Setting JSON to false
	I0805 11:27:52.761164  392242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4220,"bootTime":1722853053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:27:52.761228  392242 start.go:139] virtualization: kvm guest
	I0805 11:27:52.763387  392242 out.go:177] * [addons-624151] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:27:52.764728  392242 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:27:52.764752  392242 notify.go:220] Checking for updates...
	I0805 11:27:52.767246  392242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:27:52.768625  392242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:27:52.769916  392242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:52.771087  392242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:27:52.772244  392242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:27:52.773484  392242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:27:52.804582  392242 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 11:27:52.805771  392242 start.go:297] selected driver: kvm2
	I0805 11:27:52.805789  392242 start.go:901] validating driver "kvm2" against <nil>
	I0805 11:27:52.805802  392242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:27:52.806576  392242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:52.806676  392242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:27:52.821329  392242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:27:52.821382  392242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:27:52.821622  392242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:27:52.821698  392242 cni.go:84] Creating CNI manager for ""
	I0805 11:27:52.821716  392242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:27:52.821723  392242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 11:27:52.821793  392242 start.go:340] cluster config:
	{Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:27:52.821902  392242 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:52.823867  392242 out.go:177] * Starting "addons-624151" primary control-plane node in "addons-624151" cluster
	I0805 11:27:52.825635  392242 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:27:52.825677  392242 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 11:27:52.825686  392242 cache.go:56] Caching tarball of preloaded images
	I0805 11:27:52.825772  392242 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:27:52.825785  392242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:27:52.826133  392242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/config.json ...
	I0805 11:27:52.826161  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/config.json: {Name:mk494d23b64500b0325395df24dde97d7c38f780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:27:52.826317  392242 start.go:360] acquireMachinesLock for addons-624151: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:27:52.826371  392242 start.go:364] duration metric: took 38.07µs to acquireMachinesLock for "addons-624151"
	I0805 11:27:52.826392  392242 start.go:93] Provisioning new machine with config: &{Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:27:52.826461  392242 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 11:27:52.828342  392242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 11:27:52.828501  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:27:52.828562  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:27:52.843342  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0805 11:27:52.843875  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:27:52.844516  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:27:52.844540  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:27:52.844889  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:27:52.845059  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:27:52.845220  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:27:52.845422  392242 start.go:159] libmachine.API.Create for "addons-624151" (driver="kvm2")
	I0805 11:27:52.845453  392242 client.go:168] LocalClient.Create starting
	I0805 11:27:52.845489  392242 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:27:53.055523  392242 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:27:53.217162  392242 main.go:141] libmachine: Running pre-create checks...
	I0805 11:27:53.217192  392242 main.go:141] libmachine: (addons-624151) Calling .PreCreateCheck
	I0805 11:27:53.217788  392242 main.go:141] libmachine: (addons-624151) Calling .GetConfigRaw
	I0805 11:27:53.218271  392242 main.go:141] libmachine: Creating machine...
	I0805 11:27:53.218286  392242 main.go:141] libmachine: (addons-624151) Calling .Create
	I0805 11:27:53.218462  392242 main.go:141] libmachine: (addons-624151) Creating KVM machine...
	I0805 11:27:53.219850  392242 main.go:141] libmachine: (addons-624151) DBG | found existing default KVM network
	I0805 11:27:53.220696  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.220519  392264 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0805 11:27:53.220778  392242 main.go:141] libmachine: (addons-624151) DBG | created network xml: 
	I0805 11:27:53.220804  392242 main.go:141] libmachine: (addons-624151) DBG | <network>
	I0805 11:27:53.220818  392242 main.go:141] libmachine: (addons-624151) DBG |   <name>mk-addons-624151</name>
	I0805 11:27:53.220831  392242 main.go:141] libmachine: (addons-624151) DBG |   <dns enable='no'/>
	I0805 11:27:53.220845  392242 main.go:141] libmachine: (addons-624151) DBG |   
	I0805 11:27:53.220858  392242 main.go:141] libmachine: (addons-624151) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 11:27:53.220871  392242 main.go:141] libmachine: (addons-624151) DBG |     <dhcp>
	I0805 11:27:53.220882  392242 main.go:141] libmachine: (addons-624151) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 11:27:53.220893  392242 main.go:141] libmachine: (addons-624151) DBG |     </dhcp>
	I0805 11:27:53.220912  392242 main.go:141] libmachine: (addons-624151) DBG |   </ip>
	I0805 11:27:53.220926  392242 main.go:141] libmachine: (addons-624151) DBG |   
	I0805 11:27:53.220935  392242 main.go:141] libmachine: (addons-624151) DBG | </network>
	I0805 11:27:53.220948  392242 main.go:141] libmachine: (addons-624151) DBG | 
	I0805 11:27:53.226001  392242 main.go:141] libmachine: (addons-624151) DBG | trying to create private KVM network mk-addons-624151 192.168.39.0/24...
	I0805 11:27:53.292284  392242 main.go:141] libmachine: (addons-624151) DBG | private KVM network mk-addons-624151 192.168.39.0/24 created
	I0805 11:27:53.292343  392242 main.go:141] libmachine: (addons-624151) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151 ...
	I0805 11:27:53.292370  392242 main.go:141] libmachine: (addons-624151) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:27:53.292387  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.292267  392264 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:53.292413  392242 main.go:141] libmachine: (addons-624151) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:27:53.594302  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.594181  392264 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa...
	I0805 11:27:53.825891  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.825744  392264 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/addons-624151.rawdisk...
	I0805 11:27:53.825921  392242 main.go:141] libmachine: (addons-624151) DBG | Writing magic tar header
	I0805 11:27:53.825933  392242 main.go:141] libmachine: (addons-624151) DBG | Writing SSH key tar header
	I0805 11:27:53.825946  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.825873  392264 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151 ...
	I0805 11:27:53.826034  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151
	I0805 11:27:53.826061  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:27:53.826076  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151 (perms=drwx------)
	I0805 11:27:53.826093  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:27:53.826105  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:27:53.826116  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:27:53.826126  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:27:53.826140  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:53.826150  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:27:53.826163  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:27:53.826175  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:27:53.826183  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:27:53.826195  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home
	I0805 11:27:53.826206  392242 main.go:141] libmachine: (addons-624151) DBG | Skipping /home - not owner
	I0805 11:27:53.826213  392242 main.go:141] libmachine: (addons-624151) Creating domain...
	I0805 11:27:53.827286  392242 main.go:141] libmachine: (addons-624151) define libvirt domain using xml: 
	I0805 11:27:53.827323  392242 main.go:141] libmachine: (addons-624151) <domain type='kvm'>
	I0805 11:27:53.827335  392242 main.go:141] libmachine: (addons-624151)   <name>addons-624151</name>
	I0805 11:27:53.827348  392242 main.go:141] libmachine: (addons-624151)   <memory unit='MiB'>4000</memory>
	I0805 11:27:53.827355  392242 main.go:141] libmachine: (addons-624151)   <vcpu>2</vcpu>
	I0805 11:27:53.827361  392242 main.go:141] libmachine: (addons-624151)   <features>
	I0805 11:27:53.827366  392242 main.go:141] libmachine: (addons-624151)     <acpi/>
	I0805 11:27:53.827370  392242 main.go:141] libmachine: (addons-624151)     <apic/>
	I0805 11:27:53.827378  392242 main.go:141] libmachine: (addons-624151)     <pae/>
	I0805 11:27:53.827382  392242 main.go:141] libmachine: (addons-624151)     
	I0805 11:27:53.827387  392242 main.go:141] libmachine: (addons-624151)   </features>
	I0805 11:27:53.827394  392242 main.go:141] libmachine: (addons-624151)   <cpu mode='host-passthrough'>
	I0805 11:27:53.827399  392242 main.go:141] libmachine: (addons-624151)   
	I0805 11:27:53.827408  392242 main.go:141] libmachine: (addons-624151)   </cpu>
	I0805 11:27:53.827413  392242 main.go:141] libmachine: (addons-624151)   <os>
	I0805 11:27:53.827419  392242 main.go:141] libmachine: (addons-624151)     <type>hvm</type>
	I0805 11:27:53.827445  392242 main.go:141] libmachine: (addons-624151)     <boot dev='cdrom'/>
	I0805 11:27:53.827474  392242 main.go:141] libmachine: (addons-624151)     <boot dev='hd'/>
	I0805 11:27:53.827483  392242 main.go:141] libmachine: (addons-624151)     <bootmenu enable='no'/>
	I0805 11:27:53.827490  392242 main.go:141] libmachine: (addons-624151)   </os>
	I0805 11:27:53.827495  392242 main.go:141] libmachine: (addons-624151)   <devices>
	I0805 11:27:53.827501  392242 main.go:141] libmachine: (addons-624151)     <disk type='file' device='cdrom'>
	I0805 11:27:53.827511  392242 main.go:141] libmachine: (addons-624151)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/boot2docker.iso'/>
	I0805 11:27:53.827520  392242 main.go:141] libmachine: (addons-624151)       <target dev='hdc' bus='scsi'/>
	I0805 11:27:53.827528  392242 main.go:141] libmachine: (addons-624151)       <readonly/>
	I0805 11:27:53.827543  392242 main.go:141] libmachine: (addons-624151)     </disk>
	I0805 11:27:53.827556  392242 main.go:141] libmachine: (addons-624151)     <disk type='file' device='disk'>
	I0805 11:27:53.827568  392242 main.go:141] libmachine: (addons-624151)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:27:53.827583  392242 main.go:141] libmachine: (addons-624151)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/addons-624151.rawdisk'/>
	I0805 11:27:53.827592  392242 main.go:141] libmachine: (addons-624151)       <target dev='hda' bus='virtio'/>
	I0805 11:27:53.827597  392242 main.go:141] libmachine: (addons-624151)     </disk>
	I0805 11:27:53.827607  392242 main.go:141] libmachine: (addons-624151)     <interface type='network'>
	I0805 11:27:53.827623  392242 main.go:141] libmachine: (addons-624151)       <source network='mk-addons-624151'/>
	I0805 11:27:53.827635  392242 main.go:141] libmachine: (addons-624151)       <model type='virtio'/>
	I0805 11:27:53.827647  392242 main.go:141] libmachine: (addons-624151)     </interface>
	I0805 11:27:53.827657  392242 main.go:141] libmachine: (addons-624151)     <interface type='network'>
	I0805 11:27:53.827668  392242 main.go:141] libmachine: (addons-624151)       <source network='default'/>
	I0805 11:27:53.827683  392242 main.go:141] libmachine: (addons-624151)       <model type='virtio'/>
	I0805 11:27:53.827688  392242 main.go:141] libmachine: (addons-624151)     </interface>
	I0805 11:27:53.827696  392242 main.go:141] libmachine: (addons-624151)     <serial type='pty'>
	I0805 11:27:53.827705  392242 main.go:141] libmachine: (addons-624151)       <target port='0'/>
	I0805 11:27:53.827715  392242 main.go:141] libmachine: (addons-624151)     </serial>
	I0805 11:27:53.827726  392242 main.go:141] libmachine: (addons-624151)     <console type='pty'>
	I0805 11:27:53.827738  392242 main.go:141] libmachine: (addons-624151)       <target type='serial' port='0'/>
	I0805 11:27:53.827779  392242 main.go:141] libmachine: (addons-624151)     </console>
	I0805 11:27:53.827799  392242 main.go:141] libmachine: (addons-624151)     <rng model='virtio'>
	I0805 11:27:53.827826  392242 main.go:141] libmachine: (addons-624151)       <backend model='random'>/dev/random</backend>
	I0805 11:27:53.827838  392242 main.go:141] libmachine: (addons-624151)     </rng>
	I0805 11:27:53.827847  392242 main.go:141] libmachine: (addons-624151)     
	I0805 11:27:53.827859  392242 main.go:141] libmachine: (addons-624151)     
	I0805 11:27:53.827867  392242 main.go:141] libmachine: (addons-624151)   </devices>
	I0805 11:27:53.827872  392242 main.go:141] libmachine: (addons-624151) </domain>
	I0805 11:27:53.827880  392242 main.go:141] libmachine: (addons-624151) 
	I0805 11:27:53.833598  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:4f:e3:f8 in network default
	I0805 11:27:53.834140  392242 main.go:141] libmachine: (addons-624151) Ensuring networks are active...
	I0805 11:27:53.834159  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:53.834898  392242 main.go:141] libmachine: (addons-624151) Ensuring network default is active
	I0805 11:27:53.835256  392242 main.go:141] libmachine: (addons-624151) Ensuring network mk-addons-624151 is active
	I0805 11:27:53.835801  392242 main.go:141] libmachine: (addons-624151) Getting domain xml...
	I0805 11:27:53.836572  392242 main.go:141] libmachine: (addons-624151) Creating domain...
	I0805 11:27:55.232003  392242 main.go:141] libmachine: (addons-624151) Waiting to get IP...
	I0805 11:27:55.232750  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:55.233144  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:55.233168  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:55.233126  392264 retry.go:31] will retry after 267.947848ms: waiting for machine to come up
	I0805 11:27:55.502543  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:55.503046  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:55.503073  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:55.503008  392264 retry.go:31] will retry after 343.226091ms: waiting for machine to come up
	I0805 11:27:55.847465  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:55.847806  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:55.847828  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:55.847774  392264 retry.go:31] will retry after 296.941317ms: waiting for machine to come up
	I0805 11:27:56.146181  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:56.146506  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:56.146539  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:56.146466  392264 retry.go:31] will retry after 435.407049ms: waiting for machine to come up
	I0805 11:27:56.583207  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:56.583658  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:56.583680  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:56.583609  392264 retry.go:31] will retry after 601.17555ms: waiting for machine to come up
	I0805 11:27:57.186468  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:57.186967  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:57.186995  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:57.186926  392264 retry.go:31] will retry after 719.110935ms: waiting for machine to come up
	I0805 11:27:57.907567  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:57.908039  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:57.908070  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:57.908008  392264 retry.go:31] will retry after 934.35208ms: waiting for machine to come up
	I0805 11:27:58.844305  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:58.844653  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:58.844683  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:58.844602  392264 retry.go:31] will retry after 1.082420814s: waiting for machine to come up
	I0805 11:27:59.928932  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:59.929392  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:59.929419  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:59.929340  392264 retry.go:31] will retry after 1.228963819s: waiting for machine to come up
	I0805 11:28:01.159962  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:01.160367  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:01.160386  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:01.160331  392264 retry.go:31] will retry after 2.152496576s: waiting for machine to come up
	I0805 11:28:03.314877  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:03.315338  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:03.315416  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:03.315306  392264 retry.go:31] will retry after 2.810488145s: waiting for machine to come up
	I0805 11:28:06.127079  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:06.127443  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:06.127467  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:06.127392  392264 retry.go:31] will retry after 2.755271269s: waiting for machine to come up
	I0805 11:28:08.883971  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:08.884504  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:08.884531  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:08.884427  392264 retry.go:31] will retry after 4.321043706s: waiting for machine to come up
	I0805 11:28:13.207117  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:13.207475  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:13.207499  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:13.207423  392264 retry.go:31] will retry after 5.45439584s: waiting for machine to come up
	I0805 11:28:18.663890  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:18.664358  392242 main.go:141] libmachine: (addons-624151) Found IP for machine: 192.168.39.142
	I0805 11:28:18.664393  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has current primary IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:18.664402  392242 main.go:141] libmachine: (addons-624151) Reserving static IP address...
	I0805 11:28:18.664790  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find host DHCP lease matching {name: "addons-624151", mac: "52:54:00:7b:74:67", ip: "192.168.39.142"} in network mk-addons-624151
	I0805 11:28:18.736843  392242 main.go:141] libmachine: (addons-624151) DBG | Getting to WaitForSSH function...
	I0805 11:28:18.736877  392242 main.go:141] libmachine: (addons-624151) Reserved static IP address: 192.168.39.142
	I0805 11:28:18.736892  392242 main.go:141] libmachine: (addons-624151) Waiting for SSH to be available...
	I0805 11:28:18.739335  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:18.739596  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151
	I0805 11:28:18.739622  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find defined IP address of network mk-addons-624151 interface with MAC address 52:54:00:7b:74:67
	I0805 11:28:18.739927  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH client type: external
	I0805 11:28:18.739952  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa (-rw-------)
	I0805 11:28:18.739993  392242 main.go:141] libmachine: (addons-624151) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:28:18.740021  392242 main.go:141] libmachine: (addons-624151) DBG | About to run SSH command:
	I0805 11:28:18.740037  392242 main.go:141] libmachine: (addons-624151) DBG | exit 0
	I0805 11:28:18.743941  392242 main.go:141] libmachine: (addons-624151) DBG | SSH cmd err, output: exit status 255: 
	I0805 11:28:18.743965  392242 main.go:141] libmachine: (addons-624151) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0805 11:28:18.743974  392242 main.go:141] libmachine: (addons-624151) DBG | command : exit 0
	I0805 11:28:18.743981  392242 main.go:141] libmachine: (addons-624151) DBG | err     : exit status 255
	I0805 11:28:18.743991  392242 main.go:141] libmachine: (addons-624151) DBG | output  : 
	I0805 11:28:21.746187  392242 main.go:141] libmachine: (addons-624151) DBG | Getting to WaitForSSH function...
	I0805 11:28:21.748602  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.748946  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:21.748977  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.749101  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH client type: external
	I0805 11:28:21.749130  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa (-rw-------)
	I0805 11:28:21.749170  392242 main.go:141] libmachine: (addons-624151) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:28:21.749187  392242 main.go:141] libmachine: (addons-624151) DBG | About to run SSH command:
	I0805 11:28:21.749216  392242 main.go:141] libmachine: (addons-624151) DBG | exit 0
	I0805 11:28:21.875926  392242 main.go:141] libmachine: (addons-624151) DBG | SSH cmd err, output: <nil>: 
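The repeated "exit 0" probes above are minikube's WaitForSSH loop: it shells out to the external ssh client and retries until the guest accepts the connection (the earlier exit status 255 means sshd was not reachable yet). A minimal stdlib-only sketch of the same pattern follows; the user, IP, key path and retry interval are illustrative, not taken verbatim from the source.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH retries "ssh ... exit 0" until it succeeds or the deadline passes,
	// mirroring the external-client probe seen in the log.
	func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("/usr/bin/ssh",
				"-o", "ConnectTimeout=10",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-i", keyPath,
				fmt.Sprintf("%s@%s", user, ip),
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil // guest SSH is up
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("ssh to %s not available after %s", ip, timeout)
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
		}
	}

	func main() {
		// placeholder values; the real run uses the DHCP lease IP and the machine's id_rsa
		if err := waitForSSH("docker", "192.168.39.142", "/path/to/id_rsa", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
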
	I0805 11:28:21.876279  392242 main.go:141] libmachine: (addons-624151) KVM machine creation complete!
	I0805 11:28:21.876518  392242 main.go:141] libmachine: (addons-624151) Calling .GetConfigRaw
	I0805 11:28:21.877280  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:21.877491  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:21.877656  392242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:28:21.877673  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:21.878964  392242 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:28:21.878978  392242 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:28:21.878984  392242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:28:21.878989  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:21.881208  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.881591  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:21.881619  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.881751  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:21.881920  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.882080  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.882215  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:21.882396  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:21.882628  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:21.882643  392242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:28:21.995193  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:28:21.995218  392242 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:28:21.995228  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:21.998288  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.998695  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:21.998721  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.998924  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:21.999175  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.999370  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.999526  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:21.999785  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:21.999977  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:21.999989  392242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:28:22.112625  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:28:22.112723  392242 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:28:22.112734  392242 main.go:141] libmachine: Provisioning with buildroot...
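"Detecting the provisioner" amounts to running cat /etc/os-release over SSH and matching the ID field (here Buildroot). A rough sketch of that parse, assuming the standard key=value format and inspecting only ID:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// detectProvisioner picks a provisioner name from /etc/os-release content.
	// Only the ID field is inspected here; the real logic also considers versions.
	func detectProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return "unknown"
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9\nID=buildroot\n"
		fmt.Println(detectProvisioner(sample)) // buildroot
	}
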
	I0805 11:28:22.112743  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:28:22.112981  392242 buildroot.go:166] provisioning hostname "addons-624151"
	I0805 11:28:22.113012  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:28:22.113226  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.115718  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.116158  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.116185  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.116360  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.116936  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.117434  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.117676  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.117893  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:22.118123  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:22.118141  392242 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-624151 && echo "addons-624151" | sudo tee /etc/hostname
	I0805 11:28:22.243247  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-624151
	
	I0805 11:28:22.243274  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.246134  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.246505  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.246543  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.246755  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.246955  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.247138  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.247292  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.247446  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:22.247652  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:22.247669  392242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-624151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-624151/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-624151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:28:22.369203  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:28:22.369245  392242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:28:22.369322  392242 buildroot.go:174] setting up certificates
	I0805 11:28:22.369350  392242 provision.go:84] configureAuth start
	I0805 11:28:22.369373  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:28:22.369705  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:22.372267  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.372559  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.372587  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.372718  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.374796  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.375130  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.375157  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.375314  392242 provision.go:143] copyHostCerts
	I0805 11:28:22.375399  392242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:28:22.375597  392242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:28:22.375682  392242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:28:22.375772  392242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.addons-624151 san=[127.0.0.1 192.168.39.142 addons-624151 localhost minikube]
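The server certificate generated here is signed by the locally created minikube CA and carries the machine's addresses and names as SANs (127.0.0.1, 192.168.39.142, addons-624151, localhost, minikube). A compact crypto/x509 sketch of CA-signed server-cert generation with those SANs; file names are illustrative and error handling is trimmed for brevity.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// 1. Self-signed CA, standing in for ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// 2. Server certificate with the SANs listed in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "addons-624151"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.142")},
			DNSNames:     []string{"addons-624151", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// 3. Write server.pem; server-key.pem would be written the same way.
		out, _ := os.Create("server.pem")
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
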
	I0805 11:28:22.534263  392242 provision.go:177] copyRemoteCerts
	I0805 11:28:22.534327  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:28:22.534354  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.537700  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.538089  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.538115  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.538372  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.538581  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.538732  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.538853  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:22.626460  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:28:22.651152  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 11:28:22.676715  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 11:28:22.702742  392242 provision.go:87] duration metric: took 333.372423ms to configureAuth
	I0805 11:28:22.702777  392242 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:28:22.703027  392242 config.go:182] Loaded profile config "addons-624151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:28:22.703127  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.705594  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.705948  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.705974  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.706116  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.706321  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.706519  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.706671  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.706805  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:22.706965  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:22.706979  392242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:28:22.985822  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:28:22.985854  392242 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:28:22.985863  392242 main.go:141] libmachine: (addons-624151) Calling .GetURL
	I0805 11:28:22.987239  392242 main.go:141] libmachine: (addons-624151) DBG | Using libvirt version 6000000
	I0805 11:28:22.989198  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.989656  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.989687  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.989910  392242 main.go:141] libmachine: Docker is up and running!
	I0805 11:28:22.989924  392242 main.go:141] libmachine: Reticulating splines...
	I0805 11:28:22.989932  392242 client.go:171] duration metric: took 30.144472044s to LocalClient.Create
	I0805 11:28:22.989962  392242 start.go:167] duration metric: took 30.144541719s to libmachine.API.Create "addons-624151"
	I0805 11:28:22.989976  392242 start.go:293] postStartSetup for "addons-624151" (driver="kvm2")
	I0805 11:28:22.989991  392242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:28:22.990014  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:22.990290  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:28:22.990315  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.992656  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.992993  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.993023  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.993147  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.993326  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.993511  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.993670  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:23.078033  392242 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:28:23.082418  392242 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:28:23.082461  392242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:28:23.082553  392242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:28:23.082577  392242 start.go:296] duration metric: took 92.592498ms for postStartSetup
	I0805 11:28:23.082617  392242 main.go:141] libmachine: (addons-624151) Calling .GetConfigRaw
	I0805 11:28:23.083314  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:23.086031  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.086373  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.086399  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.086618  392242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/config.json ...
	I0805 11:28:23.086841  392242 start.go:128] duration metric: took 30.260368337s to createHost
	I0805 11:28:23.086880  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:23.089102  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.089425  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.089449  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.089562  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:23.089824  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.090014  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.090179  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:23.090359  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:23.090529  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:23.090540  392242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:28:23.204745  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722857303.187203054
	
	I0805 11:28:23.204774  392242 fix.go:216] guest clock: 1722857303.187203054
	I0805 11:28:23.204784  392242 fix.go:229] Guest: 2024-08-05 11:28:23.187203054 +0000 UTC Remote: 2024-08-05 11:28:23.086854803 +0000 UTC m=+30.362744727 (delta=100.348251ms)
	I0805 11:28:23.204851  392242 fix.go:200] guest clock delta is within tolerance: 100.348251ms
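The guest-clock check above runs date +%s.%N on the guest and compares it with the host timestamp captured around the SSH call; the resulting delta (about 100ms here) is accepted because it is within tolerance. A rough illustration of that comparison; the tolerance value is an assumption, not taken from the source.

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// clockDeltaOK compares a guest timestamp (seconds.nanoseconds, as printed by
	// `date +%s.%N`) against the host time and reports whether the skew is acceptable.
	func clockDeltaOK(guestUnixSec float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		hostSec := float64(host.UnixNano()) / float64(time.Second)
		delta := time.Duration(math.Abs(guestUnixSec-hostSec) * float64(time.Second))
		return delta, delta <= tolerance
	}

	func main() {
		guest := 1722857303.187203054           // value the guest reported in the log
		host := time.Unix(1722857303, 86854803) // host-side reference (illustrative)
		delta, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance assumed
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}
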
	I0805 11:28:23.204874  392242 start.go:83] releasing machines lock for "addons-624151", held for 30.378492825s
	I0805 11:28:23.204908  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.205244  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:23.207972  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.208595  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.208608  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.208643  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.209124  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.209307  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.209434  392242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:28:23.209482  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:23.209547  392242 ssh_runner.go:195] Run: cat /version.json
	I0805 11:28:23.209569  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:23.212093  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212264  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212450  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.212477  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212623  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.212652  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212657  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:23.212823  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:23.212868  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.213043  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.213049  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:23.213240  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:23.213243  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:23.213385  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:23.293185  392242 ssh_runner.go:195] Run: systemctl --version
	I0805 11:28:23.320391  392242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:28:23.480109  392242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:28:23.486161  392242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:28:23.486235  392242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:28:23.502622  392242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:28:23.502652  392242 start.go:495] detecting cgroup driver to use...
	I0805 11:28:23.502735  392242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:28:23.520055  392242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:28:23.534083  392242 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:28:23.534157  392242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:28:23.549226  392242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:28:23.563199  392242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:28:23.677620  392242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:28:23.819780  392242 docker.go:233] disabling docker service ...
	I0805 11:28:23.819865  392242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:28:23.833808  392242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:28:23.848833  392242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:28:23.986678  392242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:28:24.120165  392242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:28:24.133913  392242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:28:24.152724  392242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:28:24.152802  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.163359  392242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:28:24.163478  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.174058  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.184879  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.195734  392242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:28:24.206141  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.216026  392242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.233192  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
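The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, re-add conmon_cgroup = "pod", and open unprivileged low ports via default_sysctls. The same edits can be sketched in-process with regexp instead of sed; the path and values are copied from the log, error handling is minimal, and the file obviously requires root to write.

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		s := string(data)

		// pause_image = "registry.k8s.io/pause:3.9"
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
		// cgroup_manager = "cgroupfs"
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
		// drop any existing conmon_cgroup line, then add it back after cgroup_manager
		s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
		s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
		// allow pods to bind low ports, as the default_sysctls sed does
		if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
			s = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
				ReplaceAllString(s, "$1\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
		}

		if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
			panic(err)
		}
	}
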
	I0805 11:28:24.243197  392242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:28:24.252414  392242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:28:24.252470  392242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:28:24.266281  392242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:28:24.276080  392242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:28:24.398683  392242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:28:24.535410  392242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:28:24.535513  392242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:28:24.541078  392242 start.go:563] Will wait 60s for crictl version
	I0805 11:28:24.541177  392242 ssh_runner.go:195] Run: which crictl
	I0805 11:28:24.545109  392242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:28:24.585245  392242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
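"Will wait 60s for socket path /var/run/crio/crio.sock" is a simple stat poll with a deadline, run after restarting crio and before querying crictl. A stdlib sketch of that wait; the poll interval is an assumption.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s did not appear within %s", path, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
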
	I0805 11:28:24.585380  392242 ssh_runner.go:195] Run: crio --version
	I0805 11:28:24.614699  392242 ssh_runner.go:195] Run: crio --version
	I0805 11:28:24.644681  392242 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:28:24.645937  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:24.648641  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:24.648975  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:24.649005  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:24.649256  392242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:28:24.653580  392242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
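The bash one-liner above makes the host.minikube.internal entry idempotent: filter out any existing line for that name, append the fresh mapping, and copy the result back over /etc/hosts (the same pattern is used again later for control-plane.minikube.internal). The same filter-and-append in Go; path and entry are copied from the log, and writing /etc/hosts requires root.

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry removes stale lines for the given name and appends "ip\tname".
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the old mapping, like `grep -v $'\t<name>$'`
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal")
	}
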
	I0805 11:28:24.667222  392242 kubeadm.go:883] updating cluster {Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 11:28:24.667345  392242 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:28:24.667401  392242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:28:24.700316  392242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 11:28:24.700380  392242 ssh_runner.go:195] Run: which lz4
	I0805 11:28:24.707369  392242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 11:28:24.715155  392242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 11:28:24.715200  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 11:28:26.073477  392242 crio.go:462] duration metric: took 1.36614873s to copy over tarball
	I0805 11:28:26.073551  392242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 11:28:28.350339  392242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276737729s)
	I0805 11:28:28.350374  392242 crio.go:469] duration metric: took 2.276864789s to extract the tarball
	I0805 11:28:28.350385  392242 ssh_runner.go:146] rm: /preloaded.tar.lz4
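The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the ~406 MB preloaded-images tarball over when it is missing, untar it into /var with lz4, then delete the tarball. Reduced to its local-shell shape (the scp step is assumed to have already placed the file, and sudo is omitted):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"

		// Only extract when the tarball is actually present (the log's stat check).
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("tarball not present; it would be copied over first:", err)
			return
		}

		// Same extraction command as in the log.
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}

		// Clean up, as the rm after extraction does.
		_ = os.Remove(tarball)
	}
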
	I0805 11:28:28.390523  392242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:28:28.434767  392242 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:28:28.434808  392242 cache_images.go:84] Images are preloaded, skipping loading
	I0805 11:28:28.434819  392242 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.30.3 crio true true} ...
	I0805 11:28:28.434970  392242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-624151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:28:28.435042  392242 ssh_runner.go:195] Run: crio config
	I0805 11:28:28.479939  392242 cni.go:84] Creating CNI manager for ""
	I0805 11:28:28.479958  392242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:28:28.479968  392242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 11:28:28.479989  392242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-624151 NodeName:addons-624151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 11:28:28.480125  392242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-624151"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 11:28:28.480197  392242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:28:28.490472  392242 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 11:28:28.490545  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 11:28:28.500350  392242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 11:28:28.517032  392242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:28:28.533680  392242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
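The kubeadm, kubelet and kube-proxy YAML logged above is generated from the cluster config (node IP, hostname, Kubernetes version, CIDRs) and then written to /var/tmp/minikube/kubeadm.yaml.new. As a toy illustration of that substitution, here is a text/template rendering of just the InitConfiguration portion; the struct and its field names are made up for the sketch and are not minikube's actual types.

	package main

	import (
		"os"
		"text/template"
	)

	// initCfg is a trimmed stand-in for the values substituted into the
	// InitConfiguration; only a few fields are shown.
	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	  taints: []
	`

	func main() {
		cfg := initCfg{
			AdvertiseAddress: "192.168.39.142",
			BindPort:         8443,
			NodeName:         "addons-624151",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		t := template.Must(template.New("init").Parse(tmpl))
		_ = t.Execute(os.Stdout, cfg)
	}
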
	I0805 11:28:28.550370  392242 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0805 11:28:28.554386  392242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:28:28.567368  392242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:28:28.686987  392242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:28:28.705218  392242 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151 for IP: 192.168.39.142
	I0805 11:28:28.705245  392242 certs.go:194] generating shared ca certs ...
	I0805 11:28:28.705264  392242 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:28.705439  392242 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:28:28.796681  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt ...
	I0805 11:28:28.796715  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt: {Name:mkd5fdcf45ea9df6d5fa18d45bdea63152eca76d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:28.796932  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key ...
	I0805 11:28:28.796951  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key: {Name:mk1bbb53bb80f9444fe0f770cd146b0ddaa8afc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:28.797064  392242 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:28:29.059373  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt ...
	I0805 11:28:29.059411  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt: {Name:mkb34ce72bc362bcbac0cd9684abb1d30ca4c34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.059603  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key ...
	I0805 11:28:29.059614  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key: {Name:mked69d0ac8bf6b4a6eff42e658df9ea29c964f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.059692  392242 certs.go:256] generating profile certs ...
	I0805 11:28:29.059779  392242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.key
	I0805 11:28:29.059794  392242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt with IP's: []
	I0805 11:28:29.251125  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt ...
	I0805 11:28:29.251161  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: {Name:mk4ec282ef4daa54f044621721118c8d98e31968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.251330  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.key ...
	I0805 11:28:29.251343  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.key: {Name:mk6589561b42c2c2c2be68e99be6d652fd418e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.251414  392242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb
	I0805 11:28:29.251433  392242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0805 11:28:29.398753  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb ...
	I0805 11:28:29.398790  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb: {Name:mke94d236913fbf5b761f9dc674c8d40be6f2163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.398980  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb ...
	I0805 11:28:29.398995  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb: {Name:mk2bf2fa082dc57ec25f039d12e629f1e37991c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.399085  392242 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt
	I0805 11:28:29.399167  392242 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key
	I0805 11:28:29.399221  392242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key
	I0805 11:28:29.399247  392242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt with IP's: []
	I0805 11:28:29.465109  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt ...
	I0805 11:28:29.465143  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt: {Name:mk3f6493e491a193b1aa934ef0ff5632e2d4f042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.465310  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key ...
	I0805 11:28:29.465323  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key: {Name:mk0a5e508065b816cfb38cb7260296cbd40974f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.465491  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:28:29.465532  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:28:29.465559  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:28:29.465586  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:28:29.466240  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:28:29.500549  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:28:29.525523  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:28:29.557252  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:28:29.581272  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 11:28:29.606488  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 11:28:29.631674  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:28:29.658485  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 11:28:29.683337  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:28:29.707430  392242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 11:28:29.725871  392242 ssh_runner.go:195] Run: openssl version
	I0805 11:28:29.731606  392242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:28:29.742664  392242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:28:29.747361  392242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:28:29.747428  392242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:28:29.753430  392242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:28:29.765074  392242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:28:29.769282  392242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:28:29.769361  392242 kubeadm.go:392] StartCluster: {Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:28:29.769443  392242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 11:28:29.769634  392242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 11:28:29.806553  392242 cri.go:89] found id: ""
	I0805 11:28:29.806646  392242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 11:28:29.817288  392242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 11:28:29.827658  392242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 11:28:29.838106  392242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 11:28:29.838141  392242 kubeadm.go:157] found existing configuration files:
	
	I0805 11:28:29.838201  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 11:28:29.849540  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 11:28:29.849612  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 11:28:29.861521  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 11:28:29.871652  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 11:28:29.871718  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 11:28:29.881852  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 11:28:29.891180  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 11:28:29.891234  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 11:28:29.901243  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 11:28:29.911541  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 11:28:29.911611  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 11:28:29.921893  392242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 11:28:29.984348  392242 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 11:28:29.984412  392242 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 11:28:30.123453  392242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 11:28:30.123636  392242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 11:28:30.123815  392242 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 11:28:30.372922  392242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 11:28:30.455517  392242 out.go:204]   - Generating certificates and keys ...
	I0805 11:28:30.455678  392242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 11:28:30.455793  392242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 11:28:30.455891  392242 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 11:28:30.542510  392242 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 11:28:30.805444  392242 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 11:28:30.929956  392242 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 11:28:31.306559  392242 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 11:28:31.306818  392242 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-624151 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0805 11:28:31.525964  392242 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 11:28:31.526266  392242 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-624151 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0805 11:28:31.798928  392242 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 11:28:31.889013  392242 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 11:28:32.078756  392242 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 11:28:32.078825  392242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 11:28:32.257153  392242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 11:28:32.553855  392242 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 11:28:32.641238  392242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 11:28:32.849721  392242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 11:28:32.933788  392242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 11:28:32.934422  392242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 11:28:32.936908  392242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 11:28:32.938773  392242 out.go:204]   - Booting up control plane ...
	I0805 11:28:32.938897  392242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 11:28:32.938988  392242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 11:28:32.939219  392242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 11:28:32.954696  392242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 11:28:32.955636  392242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 11:28:32.955703  392242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 11:28:33.084896  392242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 11:28:33.085004  392242 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 11:28:33.586597  392242 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.990366ms
	I0805 11:28:33.586724  392242 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 11:28:38.586530  392242 kubeadm.go:310] [api-check] The API server is healthy after 5.001960162s
	I0805 11:28:38.598390  392242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 11:28:38.613336  392242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 11:28:38.638697  392242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 11:28:38.638920  392242 kubeadm.go:310] [mark-control-plane] Marking the node addons-624151 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 11:28:38.654612  392242 kubeadm.go:310] [bootstrap-token] Using token: 9dtprs.4desv7mp1hzrofda
	I0805 11:28:38.655937  392242 out.go:204]   - Configuring RBAC rules ...
	I0805 11:28:38.656045  392242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 11:28:38.663853  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 11:28:38.674484  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 11:28:38.681354  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 11:28:38.685389  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 11:28:38.688922  392242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 11:28:38.993498  392242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 11:28:39.435299  392242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 11:28:39.994201  392242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 11:28:39.994227  392242 kubeadm.go:310] 
	I0805 11:28:39.994289  392242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 11:28:39.994301  392242 kubeadm.go:310] 
	I0805 11:28:39.994384  392242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 11:28:39.994426  392242 kubeadm.go:310] 
	I0805 11:28:39.994485  392242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 11:28:39.994560  392242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 11:28:39.994632  392242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 11:28:39.994640  392242 kubeadm.go:310] 
	I0805 11:28:39.994726  392242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 11:28:39.994741  392242 kubeadm.go:310] 
	I0805 11:28:39.994808  392242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 11:28:39.994817  392242 kubeadm.go:310] 
	I0805 11:28:39.994859  392242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 11:28:39.994923  392242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 11:28:39.994985  392242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 11:28:39.995000  392242 kubeadm.go:310] 
	I0805 11:28:39.995085  392242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 11:28:39.995174  392242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 11:28:39.995181  392242 kubeadm.go:310] 
	I0805 11:28:39.995287  392242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9dtprs.4desv7mp1hzrofda \
	I0805 11:28:39.995386  392242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 11:28:39.995405  392242 kubeadm.go:310] 	--control-plane 
	I0805 11:28:39.995432  392242 kubeadm.go:310] 
	I0805 11:28:39.995543  392242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 11:28:39.995552  392242 kubeadm.go:310] 
	I0805 11:28:39.995644  392242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9dtprs.4desv7mp1hzrofda \
	I0805 11:28:39.995788  392242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 11:28:39.996279  392242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 11:28:39.996309  392242 cni.go:84] Creating CNI manager for ""
	I0805 11:28:39.996324  392242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:28:39.998756  392242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 11:28:39.999913  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 11:28:40.012040  392242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 11:28:40.033421  392242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 11:28:40.033514  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:40.033525  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-624151 minikube.k8s.io/updated_at=2024_08_05T11_28_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=addons-624151 minikube.k8s.io/primary=true
	I0805 11:28:40.082033  392242 ops.go:34] apiserver oom_adj: -16
	I0805 11:28:40.168298  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:40.669195  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:41.168732  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:41.668805  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:42.168893  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:42.668347  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:43.168590  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:43.668603  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:44.168689  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:44.668341  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:45.168382  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:45.668964  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:46.168441  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:46.668705  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:47.169012  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:47.668975  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:48.169164  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:48.669111  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:49.168358  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:49.668340  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:50.168917  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:50.668835  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:51.169206  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:51.668968  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:52.169264  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:52.669007  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:53.168480  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:53.272052  392242 kubeadm.go:1113] duration metric: took 13.238603143s to wait for elevateKubeSystemPrivileges
	I0805 11:28:53.272103  392242 kubeadm.go:394] duration metric: took 23.502751026s to StartCluster
	I0805 11:28:53.272130  392242 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:53.272306  392242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:28:53.272827  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:53.273073  392242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 11:28:53.273102  392242 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:28:53.273190  392242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0805 11:28:53.273300  392242 addons.go:69] Setting yakd=true in profile "addons-624151"
	I0805 11:28:53.273308  392242 addons.go:69] Setting gcp-auth=true in profile "addons-624151"
	I0805 11:28:53.273324  392242 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-624151"
	I0805 11:28:53.273347  392242 addons.go:69] Setting default-storageclass=true in profile "addons-624151"
	I0805 11:28:53.273354  392242 addons.go:69] Setting helm-tiller=true in profile "addons-624151"
	I0805 11:28:53.273370  392242 config.go:182] Loaded profile config "addons-624151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:28:53.273385  392242 addons.go:69] Setting ingress-dns=true in profile "addons-624151"
	I0805 11:28:53.273392  392242 addons.go:69] Setting inspektor-gadget=true in profile "addons-624151"
	I0805 11:28:53.273396  392242 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-624151"
	I0805 11:28:53.273396  392242 addons.go:69] Setting storage-provisioner=true in profile "addons-624151"
	I0805 11:28:53.273404  392242 addons.go:69] Setting metrics-server=true in profile "addons-624151"
	I0805 11:28:53.273413  392242 addons.go:234] Setting addon ingress-dns=true in "addons-624151"
	I0805 11:28:53.273418  392242 addons.go:234] Setting addon storage-provisioner=true in "addons-624151"
	I0805 11:28:53.273421  392242 addons.go:69] Setting ingress=true in profile "addons-624151"
	I0805 11:28:53.273425  392242 addons.go:234] Setting addon metrics-server=true in "addons-624151"
	I0805 11:28:53.273435  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273441  392242 addons.go:234] Setting addon ingress=true in "addons-624151"
	I0805 11:28:53.273450  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273453  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273456  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273466  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273341  392242 mustload.go:65] Loading cluster: addons-624151
	I0805 11:28:53.273379  392242 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-624151"
	I0805 11:28:53.273585  392242 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-624151"
	I0805 11:28:53.273605  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273750  392242 config.go:182] Loaded profile config "addons-624151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:28:53.273348  392242 addons.go:234] Setting addon yakd=true in "addons-624151"
	I0805 11:28:53.273924  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273996  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.273371  392242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-624151"
	I0805 11:28:53.274043  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273422  392242 addons.go:234] Setting addon inspektor-gadget=true in "addons-624151"
	I0805 11:28:53.273379  392242 addons.go:69] Setting registry=true in profile "addons-624151"
	I0805 11:28:53.274122  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274132  392242 addons.go:234] Setting addon registry=true in "addons-624151"
	I0805 11:28:53.274164  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274168  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274127  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274246  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273388  392242 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-624151"
	I0805 11:28:53.274302  392242 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-624151"
	I0805 11:28:53.273397  392242 addons.go:69] Setting volumesnapshots=true in profile "addons-624151"
	I0805 11:28:53.274317  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274333  392242 addons.go:234] Setting addon volumesnapshots=true in "addons-624151"
	I0805 11:28:53.273940  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274349  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274360  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273925  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.273372  392242 addons.go:234] Setting addon helm-tiller=true in "addons-624151"
	I0805 11:28:53.274446  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274486  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274500  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274510  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274529  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273349  392242 addons.go:69] Setting cloud-spanner=true in profile "addons-624151"
	I0805 11:28:53.274578  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274592  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274594  392242 addons.go:234] Setting addon cloud-spanner=true in "addons-624151"
	I0805 11:28:53.274599  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274556  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274613  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274762  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274790  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274956  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.273388  392242 addons.go:69] Setting volcano=true in profile "addons-624151"
	I0805 11:28:53.274974  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274990  392242 addons.go:234] Setting addon volcano=true in "addons-624151"
	I0805 11:28:53.275035  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.275068  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.275085  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.275102  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.275105  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.275126  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.275366  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.275896  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.281283  392242 out.go:177] * Verifying Kubernetes components...
	I0805 11:28:53.282778  392242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:28:53.290478  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0805 11:28:53.290993  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.291483  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.291506  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.297093  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.297420  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.297430  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.297477  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.297485  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.298060  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.298098  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.313640  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0805 11:28:53.314400  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.315025  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.315045  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.315482  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.316131  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.316177  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.319860  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0805 11:28:53.320377  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.320968  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.320998  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.321357  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.321574  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.325643  392242 addons.go:234] Setting addon default-storageclass=true in "addons-624151"
	I0805 11:28:53.325695  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.326050  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.326085  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.326352  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0805 11:28:53.326851  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.327387  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.327413  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.327810  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.328393  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.328421  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.329323  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I0805 11:28:53.329817  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.330300  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.330316  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.330720  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.331241  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.331270  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.338402  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0805 11:28:53.338842  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.339356  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.339375  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.339712  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.339906  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.341718  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0805 11:28:53.341898  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
	I0805 11:28:53.343191  392242 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-624151"
	I0805 11:28:53.343231  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.343560  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.343599  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.343853  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.344461  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.344479  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.344914  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.345463  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.345506  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.346621  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.347304  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.347321  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.347761  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.348337  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.348375  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.350054  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0805 11:28:53.350602  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.352436  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.352453  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.353043  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.353844  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.353871  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.354160  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0805 11:28:53.354663  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.355244  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.355262  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.355642  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.356225  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.356266  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.357988  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I0805 11:28:53.358436  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.358789  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0805 11:28:53.359086  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.359102  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.359440  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.359729  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.360008  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.360023  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.360674  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.360719  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.361810  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0805 11:28:53.362362  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.362895  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.362911  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.363288  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.363841  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.363876  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.368411  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.369041  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.369098  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.370015  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37015
	I0805 11:28:53.370550  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.371131  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.371157  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.371221  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0805 11:28:53.371852  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.371986  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.372464  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.372482  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.372876  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.373147  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.374104  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0805 11:28:53.374566  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.375013  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.375030  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.375391  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.375451  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.376203  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.376244  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.376441  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.376484  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.377640  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0805 11:28:53.378853  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0805 11:28:53.378880  392242 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0805 11:28:53.378909  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.381767  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0805 11:28:53.382397  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.383051  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.383070  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.383475  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.383708  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.384348  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.384867  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.384895  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.385210  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.385394  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.385543  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.385726  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.387348  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0805 11:28:53.387495  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0805 11:28:53.388211  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0805 11:28:53.388803  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.388918  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.389327  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.389365  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.389616  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.389646  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.390264  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.390282  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.390349  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33503
	I0805 11:28:53.390585  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.390606  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.390858  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.391052  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.391294  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.391310  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.391435  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.391526  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.392021  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.392051  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.392417  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.393249  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.393291  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.393593  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.393749  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.393946  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.394279  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.394335  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.395767  392242 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0805 11:28:53.396005  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.396323  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44169
	I0805 11:28:53.397140  392242 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 11:28:53.397163  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0805 11:28:53.397184  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.397747  392242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 11:28:53.398465  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.399099  392242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:28:53.399122  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 11:28:53.399143  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.399459  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.399478  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.400454  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.401018  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.401686  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I0805 11:28:53.401877  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0805 11:28:53.402381  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.402473  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0805 11:28:53.402685  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.402821  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.403311  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.403339  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.403427  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.403518  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.403534  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.403567  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.403811  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.403867  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.404164  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.404226  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.404642  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.404998  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.405019  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.405281  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.405499  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.405658  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.405854  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.406480  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.406502  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.406966  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.407036  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.407089  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.407286  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.408736  392242 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0805 11:28:53.408795  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0805 11:28:53.409932  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 11:28:53.409952  392242 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 11:28:53.409974  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.410170  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.410468  392242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 11:28:53.410481  392242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 11:28:53.410499  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.410681  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.411576  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0805 11:28:53.412333  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.412354  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.414015  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.414239  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0805 11:28:53.414493  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.414517  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.414666  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.414717  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.414847  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.414988  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.415118  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.415325  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.415349  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.415761  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.415811  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.416030  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.416306  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.416500  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.416985  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.417452  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0805 11:28:53.418676  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.420037  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0805 11:28:53.420038  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0805 11:28:53.420615  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0805 11:28:53.422886  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0805 11:28:53.423364  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.423412  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.423905  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.423945  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.424160  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.424184  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.424399  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 11:28:53.424423  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.424452  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0805 11:28:53.424625  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.424669  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.424794  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.426533  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.427024  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 11:28:53.427057  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0805 11:28:53.427287  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0805 11:28:53.428237  392242 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0805 11:28:53.428469  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.428583  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.428697  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0805 11:28:53.429116  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.429293  392242 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 11:28:53.429311  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.429320  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0805 11:28:53.429325  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.429340  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.429705  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.429836  392242 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0805 11:28:53.429843  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.429851  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0805 11:28:53.429867  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.430163  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0805 11:28:53.430192  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34365
	I0805 11:28:53.430656  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0805 11:28:53.430747  392242 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0805 11:28:53.430805  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.431307  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.431317  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.431484  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.431548  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I0805 11:28:53.431641  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.431663  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.431852  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0805 11:28:53.431868  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0805 11:28:53.431896  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.432225  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.432403  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.433120  392242 out.go:177]   - Using image docker.io/registry:2.8.3
	I0805 11:28:53.433230  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.433653  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.433891  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:53.433906  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:53.433914  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I0805 11:28:53.434032  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.434093  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:53.434114  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:53.434121  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:53.434133  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:53.434141  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:53.434226  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.434310  392242 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0805 11:28:53.434322  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0805 11:28:53.434340  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.434447  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:53.434461  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:53.434468  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:53.434467  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	W0805 11:28:53.434529  392242 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0805 11:28:53.435011  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.435565  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.435590  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.435901  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.435926  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.436219  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.436583  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.436598  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.437132  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.437208  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.437293  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.437489  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.437456  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.437518  392242 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0805 11:28:53.438095  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.438116  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.438383  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.438587  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.438744  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.438879  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.439023  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.439058  392242 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 11:28:53.439075  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0805 11:28:53.439091  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.439487  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.439501  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.439570  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.440007  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.440015  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.440178  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.440271  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.440288  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.440319  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.440507  392242 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0805 11:28:53.440584  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.440849  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.440789  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.440822  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.440913  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.441023  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.441235  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.441247  392242 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0805 11:28:53.441282  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.441422  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.441763  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.441781  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.441901  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.442007  392242 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0805 11:28:53.442024  392242 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0805 11:28:53.442045  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.442125  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.442409  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.442718  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0805 11:28:53.442735  392242 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0805 11:28:53.442753  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.443037  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.443189  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.443574  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.443592  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.443790  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.444496  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.444684  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.444830  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.445733  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.445833  392242 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0805 11:28:53.446208  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.446287  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.446365  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.446523  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.446669  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.446786  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.447125  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.447240  392242 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0805 11:28:53.447254  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0805 11:28:53.447269  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.447507  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.447522  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.447784  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.447955  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.448128  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.448274  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.450072  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.450504  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.450527  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.450698  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.450847  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.450995  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.451148  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	W0805 11:28:53.452038  392242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53610->192.168.39.142:22: read: connection reset by peer
	I0805 11:28:53.452065  392242 retry.go:31] will retry after 145.787354ms: ssh: handshake failed: read tcp 192.168.39.1:53610->192.168.39.142:22: read: connection reset by peer
	I0805 11:28:53.454613  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0805 11:28:53.454963  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.455451  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.455476  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.455913  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.456108  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.457761  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.459541  392242 out.go:177]   - Using image docker.io/busybox:stable
	I0805 11:28:53.461088  392242 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0805 11:28:53.462298  392242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 11:28:53.462316  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0805 11:28:53.462337  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.465279  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.465664  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.465691  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.466181  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.466462  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.466661  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.466840  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	W0805 11:28:53.485314  392242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53616->192.168.39.142:22: read: connection reset by peer
	I0805 11:28:53.485355  392242 retry.go:31] will retry after 197.740583ms: ssh: handshake failed: read tcp 192.168.39.1:53616->192.168.39.142:22: read: connection reset by peer
	W0805 11:28:53.684682  392242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0805 11:28:53.684735  392242 retry.go:31] will retry after 453.404253ms: ssh: handshake failed: EOF
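The three "dial failure (will retry)" warnings above (11:28:53.452, .485, .684) are transient: minikube has just opened a burst of parallel SSH sessions to the freshly booted VM, and the first handshakes are reset or dropped before completing, so each is retried after a short, growing delay. A minimal shell sketch of the same retry pattern follows; the host, key path, and attempt count are illustrative and are not lifted from minikube's retry.go.

    #!/usr/bin/env bash
    # Probe sshd on the new VM, retrying transient handshake failures with a
    # growing delay, in the spirit of the retry.go lines above.
    host="192.168.39.142"
    key="$HOME/.minikube/machines/addons-624151/id_rsa"   # illustrative path
    delay=0.15
    for attempt in 1 2 3 4 5; do
      if ssh -i "$key" -o ConnectTimeout=5 -o StrictHostKeyChecking=no docker@"$host" true; then
        echo "ssh ready after ${attempt} attempt(s)"
        exit 0
      fi
      echo "handshake failed, retrying in ${delay}s"
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN{print d*2}')       # back off between tries
    done
    echo "ssh still unreachable after 5 attempts" >&2
    exit 1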
	I0805 11:28:53.722077  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:28:53.839879  392242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0805 11:28:53.839905  392242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0805 11:28:53.873551  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0805 11:28:53.937056  392242 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0805 11:28:53.937086  392242 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0805 11:28:53.939593  392242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0805 11:28:53.939616  392242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0805 11:28:53.961178  392242 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0805 11:28:53.961216  392242 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0805 11:28:53.968824  392242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:28:53.968845  392242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
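The /bin/bash one-liner above edits the coredns ConfigMap in place: it dumps the current Corefile, uses sed to insert a hosts block ahead of the existing "forward . /etc/resolv.conf" line and a log directive ahead of "errors", then replaces the ConfigMap. Based on those sed expressions (the stock directives in between are elided here), the edited stanza should end up looking roughly like:

    log
    errors
    ...
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

The "host record injected into CoreDNS's ConfigMap" line later in this log confirms the replacement succeeded.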
	I0805 11:28:53.975230  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 11:28:53.978631  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 11:28:53.994060  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0805 11:28:53.994093  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0805 11:28:54.064218  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 11:28:54.076542  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 11:28:54.076568  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0805 11:28:54.078434  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 11:28:54.084690  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0805 11:28:54.084716  392242 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0805 11:28:54.088075  392242 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0805 11:28:54.088100  392242 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0805 11:28:54.099056  392242 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 11:28:54.099083  392242 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0805 11:28:54.103353  392242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0805 11:28:54.103380  392242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0805 11:28:54.130326  392242 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0805 11:28:54.130355  392242 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0805 11:28:54.217782  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0805 11:28:54.217810  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0805 11:28:54.218499  392242 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0805 11:28:54.218526  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0805 11:28:54.220191  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0805 11:28:54.220210  392242 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0805 11:28:54.348362  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0805 11:28:54.348402  392242 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0805 11:28:54.364151  392242 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 11:28:54.364176  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0805 11:28:54.411467  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0805 11:28:54.411497  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0805 11:28:54.417184  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 11:28:54.417207  392242 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 11:28:54.421354  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 11:28:54.425905  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0805 11:28:54.532320  392242 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0805 11:28:54.532355  392242 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0805 11:28:54.680122  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0805 11:28:54.680159  392242 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0805 11:28:54.681761  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0805 11:28:54.681787  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0805 11:28:54.701559  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 11:28:54.721417  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 11:28:54.721453  392242 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 11:28:54.871016  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 11:28:54.904692  392242 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0805 11:28:54.904732  392242 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0805 11:28:54.970601  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0805 11:28:54.970635  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0805 11:28:55.016003  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 11:28:55.018227  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0805 11:28:55.018253  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0805 11:28:55.267010  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0805 11:28:55.299260  392242 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0805 11:28:55.299285  392242 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0805 11:28:55.512866  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0805 11:28:55.512893  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0805 11:28:55.541255  392242 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0805 11:28:55.541285  392242 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0805 11:28:55.713412  392242 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 11:28:55.713444  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0805 11:28:55.789517  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0805 11:28:55.789554  392242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0805 11:28:56.173717  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 11:28:56.178268  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0805 11:28:56.178311  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0805 11:28:56.564459  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0805 11:28:56.564486  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0805 11:28:56.956022  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 11:28:56.956203  392242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0805 11:28:57.170027  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 11:28:57.535128  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.813003444s)
	I0805 11:28:57.535168  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.661577619s)
	I0805 11:28:57.535200  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535211  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535218  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.535224  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.535248  392242 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.566379929s)
	I0805 11:28:57.535276  392242 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0805 11:28:57.535218  392242 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.566352546s)
	I0805 11:28:57.535337  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.560080123s)
	I0805 11:28:57.535367  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535377  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.535781  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.535799  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.535809  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535817  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.536401  392242 node_ready.go:35] waiting up to 6m0s for node "addons-624151" to be "Ready" ...
	I0805 11:28:57.536503  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.536522  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.536551  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.536559  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.536569  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.536576  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.536595  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.536610  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.536623  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.536632  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.536847  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.536918  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.536925  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.537093  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.537104  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.537198  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.537236  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.537251  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.615292  392242 node_ready.go:49] node "addons-624151" has status "Ready":"True"
	I0805 11:28:57.615325  392242 node_ready.go:38] duration metric: took 78.902708ms for node "addons-624151" to be "Ready" ...
	I0805 11:28:57.615338  392242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:28:57.701322  392242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gmwdn" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:57.706215  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.706237  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.706610  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.706634  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:58.073820  392242 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-624151" context rescaled to 1 replicas
	I0805 11:28:58.709710  392242 pod_ready.go:92] pod "coredns-7db6d8ff4d-gmwdn" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.709738  392242 pod_ready.go:81] duration metric: took 1.008372873s for pod "coredns-7db6d8ff4d-gmwdn" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.709751  392242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s7xqd" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.723918  392242 pod_ready.go:92] pod "coredns-7db6d8ff4d-s7xqd" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.723947  392242 pod_ready.go:81] duration metric: took 14.18693ms for pod "coredns-7db6d8ff4d-s7xqd" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.723958  392242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.730686  392242 pod_ready.go:92] pod "etcd-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.730714  392242 pod_ready.go:81] duration metric: took 6.746982ms for pod "etcd-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.730725  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.742980  392242 pod_ready.go:92] pod "kube-apiserver-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.743011  392242 pod_ready.go:81] duration metric: took 12.277228ms for pod "kube-apiserver-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.743024  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.763952  392242 pod_ready.go:92] pod "kube-controller-manager-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.763978  392242 pod_ready.go:81] duration metric: took 20.944907ms for pod "kube-controller-manager-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.763994  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nbpvj" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.180033  392242 pod_ready.go:92] pod "kube-proxy-nbpvj" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:59.180061  392242 pod_ready.go:81] duration metric: took 416.060257ms for pod "kube-proxy-nbpvj" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.180071  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.600019  392242 pod_ready.go:92] pod "kube-scheduler-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:59.600046  392242 pod_ready.go:81] duration metric: took 419.968921ms for pod "kube-scheduler-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.600057  392242 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace to be "Ready" ...
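Each pod_ready check above polls the named system pod until its Ready condition reports True, with a 6m ceiling per pod. Outside the test harness, a roughly equivalent one-shot check for this profile can be made with kubectl wait; the context, label, and pod names below come from this run, and the timeout simply mirrors the 6m budget.

    # Wait until CoreDNS and the static control-plane pods of this profile are Ready.
    kubectl --context addons-624151 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl --context addons-624151 -n kube-system wait pod \
      etcd-addons-624151 kube-apiserver-addons-624151 \
      kube-controller-manager-addons-624151 kube-scheduler-addons-624151 \
      --for=condition=Ready --timeout=6m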
	I0805 11:29:00.500985  392242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0805 11:29:00.501027  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:29:00.504481  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:00.505035  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:29:00.505069  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:00.505265  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:29:00.505557  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:29:00.505728  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:29:00.505889  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:29:00.785710  392242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0805 11:29:01.020555  392242 addons.go:234] Setting addon gcp-auth=true in "addons-624151"
	I0805 11:29:01.020615  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:29:01.020920  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:29:01.020949  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:29:01.037267  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0805 11:29:01.037751  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:29:01.038337  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:29:01.038362  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:29:01.038731  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:29:01.039252  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:29:01.039287  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:29:01.054696  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
	I0805 11:29:01.055168  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:29:01.055705  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:29:01.055733  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:29:01.056084  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:29:01.056275  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:29:01.058094  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:29:01.058355  392242 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0805 11:29:01.058380  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:29:01.061816  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:01.062345  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:29:01.062374  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:01.062561  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:29:01.062772  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:29:01.062997  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:29:01.063141  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
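The block from 11:29:00.500 onward copies the host's Google application credentials and project id into the VM and then sets gcp-auth=true for this profile. For reference, the same addon can be toggled by hand with the standard minikube CLI; the commands below are generic CLI usage, not something this log runs.

    # Toggle the gcp-auth addon for the addons-624151 profile manually.
    minikube -p addons-624151 addons enable gcp-auth
    minikube -p addons-624151 addons disable gcp-auth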
	I0805 11:29:01.617911  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:02.109487  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.130816341s)
	I0805 11:29:02.109543  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109541  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.045286653s)
	I0805 11:29:02.109576  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.031114184s)
	I0805 11:29:02.109593  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109618  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109556  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109644  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.683703566s)
	I0805 11:29:02.109614  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.688233142s)
	I0805 11:29:02.109672  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109696  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109704  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109624  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109717  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109706  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110096  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110099  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110110  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110115  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110118  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110122  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110124  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110099  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110127  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110132  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110139  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110140  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110147  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110149  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110155  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110157  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110161  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110169  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110096  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110098  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110223  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110232  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110239  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110132  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110281  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110323  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110330  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110464  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110483  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110504  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110510  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110570  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.408976092s)
	W0805 11:29:02.110603  392242 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 11:29:02.110625  392242 retry.go:31] will retry after 322.036799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
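(The apply above fails because csi-hostpath-snapshotclass.yaml is submitted in the same batch as the snapshot.storage.k8s.io CRDs, so the VolumeSnapshotClass kind is not yet served; minikube schedules a retry and, as the later log lines show, re-applies with --force. A minimal manual check for the same condition, assuming kubectl on the host is pointed at the addons-624151 cluster, would be:

    kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io

Once the CRD reports Established, re-applying the VolumeSnapshotClass manifest is expected to succeed.)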
	I0805 11:29:02.110605  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.239561603s)
	I0805 11:29:02.110653  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110673  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110678  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.094649931s)
	I0805 11:29:02.110684  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110694  392242 addons.go:475] Verifying addon registry=true in "addons-624151"
	I0805 11:29:02.110737  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.843697299s)
	I0805 11:29:02.110751  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110839  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.937089503s)
	I0805 11:29:02.110853  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110861  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110753  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.111086  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.111156  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.111182  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.111188  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.111195  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.111202  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112065  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112096  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112103  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110697  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112133  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112111  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112186  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112518  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112544  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112558  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112561  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112570  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112579  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112584  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112591  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112676  392242 out.go:177] * Verifying registry addon...
	I0805 11:29:02.112790  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112801  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112810  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112818  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112867  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112877  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112885  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112897  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112905  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.113159  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.113179  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.113210  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.113218  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.113465  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.113766  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.113777  392242 addons.go:475] Verifying addon ingress=true in "addons-624151"
	I0805 11:29:02.113489  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.114924  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.114935  392242 addons.go:475] Verifying addon metrics-server=true in "addons-624151"
	I0805 11:29:02.113494  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.115860  392242 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-624151 service yakd-dashboard -n yakd-dashboard
	
	I0805 11:29:02.116799  392242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0805 11:29:02.117102  392242 out.go:177] * Verifying ingress addon...
	I0805 11:29:02.119458  392242 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0805 11:29:02.133081  392242 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 11:29:02.133100  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:02.133390  392242 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0805 11:29:02.133413  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
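(The kapi.go waits above poll pods by label selector until they report Ready. A hedged manual equivalent, assuming the same selectors and namespaces shown in the log, is:

    kubectl --context addons-624151 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-624151 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl --context addons-624151 -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=90s

The context name is assumed to match the profile name used throughout this run.)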
	I0805 11:29:02.156755  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.156777  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.157172  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.157196  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.432920  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 11:29:02.634791  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:02.641958  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:03.160178  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:03.192398  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:03.345417  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.175330258s)
	I0805 11:29:03.345485  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:03.345497  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:03.345503  392242 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.28711956s)
	I0805 11:29:03.345865  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:03.345913  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:03.345927  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:03.345936  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:03.345895  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:03.346162  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:03.346176  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:03.346196  392242 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-624151"
	I0805 11:29:03.347049  392242 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0805 11:29:03.348045  392242 out.go:177] * Verifying csi-hostpath-driver addon...
	I0805 11:29:03.349208  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 11:29:03.350308  392242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0805 11:29:03.350400  392242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0805 11:29:03.350423  392242 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0805 11:29:03.365249  392242 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 11:29:03.365275  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:03.503754  392242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0805 11:29:03.503788  392242 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0805 11:29:03.622251  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:03.626042  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:03.630350  392242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 11:29:03.630369  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0805 11:29:03.728581  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 11:29:03.856312  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:04.106628  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:04.121758  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:04.123389  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:04.286549  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853568405s)
	I0805 11:29:04.286610  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.286627  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.286951  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.286971  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.286981  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.286990  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.287239  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:04.287302  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.287332  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.357236  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:04.620700  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:04.626357  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:04.873459  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.144827306s)
	I0805 11:29:04.873560  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.873578  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.873901  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:04.873933  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.873977  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.873990  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.873998  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.874315  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:04.874356  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.874366  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.876305  392242 addons.go:475] Verifying addon gcp-auth=true in "addons-624151"
	I0805 11:29:04.878162  392242 out.go:177] * Verifying gcp-auth addon...
	I0805 11:29:04.880791  392242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0805 11:29:04.913922  392242 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0805 11:29:04.913946  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:04.920031  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:05.157737  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:05.162084  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:05.361530  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:05.385307  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:05.623386  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:05.627488  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:05.860821  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:05.884885  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:06.110986  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:06.124453  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:06.127324  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:06.356481  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:06.385153  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:06.623434  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:06.623700  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:06.856109  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:06.885131  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:07.122068  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:07.124591  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:07.357294  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:07.384976  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:07.623498  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:07.623635  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:07.858727  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:07.885027  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:08.121476  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:08.123483  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:08.355842  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:08.384129  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:08.606923  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:08.622016  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:08.623790  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:08.856633  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:08.884973  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:09.124834  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:09.125398  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:09.356576  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:09.384524  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:09.623927  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:09.624445  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:10.062752  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:10.065566  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:10.120994  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:10.123410  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:10.357489  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:10.385844  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:10.621626  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:10.623637  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:10.856584  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:10.884955  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:11.106106  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:11.121246  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:11.123514  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:11.356465  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:11.385108  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:11.621620  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:11.623180  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:11.856118  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:11.884542  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:12.123102  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:12.124350  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:12.355811  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:12.384586  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:12.621317  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:12.622533  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:12.855702  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:12.884737  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:13.107308  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:13.121265  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:13.123618  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:13.356739  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:13.384345  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:13.622007  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:13.624070  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:13.856884  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:13.884280  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:14.120581  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:14.122920  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:14.356840  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:14.384440  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:14.624233  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:14.626668  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:14.855993  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:14.884965  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:15.122551  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:15.129841  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:15.365715  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:15.385488  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:15.608138  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:15.624823  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:15.627576  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:15.856769  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:15.885352  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:16.121787  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:16.130019  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:16.357716  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:16.385492  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:16.621832  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:16.625102  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:16.856923  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:16.885548  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:17.121658  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:17.123527  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:17.356229  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:17.384482  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:17.622382  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:17.624637  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:17.856820  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:17.885581  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:18.431663  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:18.438989  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:18.444580  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:18.444972  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:18.449980  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:18.620788  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:18.623249  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:18.855599  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:18.885061  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:19.124627  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:19.126007  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:19.358446  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:19.384782  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:19.621585  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:19.624423  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:19.856296  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:19.884812  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:20.122446  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:20.123672  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:20.356373  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:20.385012  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:20.607038  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:20.622565  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:20.624345  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:21.041841  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:21.042951  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:21.121973  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:21.123618  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:21.356276  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:21.385444  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:21.622067  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:21.625061  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:21.858688  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:21.884672  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:22.122074  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:22.123419  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:22.356770  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:22.384772  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:22.606728  392242 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"True"
	I0805 11:29:22.606751  392242 pod_ready.go:81] duration metric: took 23.006687541s for pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace to be "Ready" ...
	I0805 11:29:22.606761  392242 pod_ready.go:38] duration metric: took 24.991409704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:29:22.606795  392242 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:29:22.606862  392242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:29:22.621701  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:22.624813  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:22.629676  392242 api_server.go:72] duration metric: took 29.356540455s to wait for apiserver process to appear ...
	I0805 11:29:22.629693  392242 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:29:22.629770  392242 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0805 11:29:22.634749  392242 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0805 11:29:22.635757  392242 api_server.go:141] control plane version: v1.30.3
	I0805 11:29:22.635786  392242 api_server.go:131] duration metric: took 6.08634ms to wait for apiserver health ...
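(The healthz probe against https://192.168.39.142:8443/healthz can also be reproduced through the API server's raw endpoints, assuming kubectl access to the same cluster:

    kubectl --context addons-624151 get --raw /healthz
    kubectl --context addons-624151 get --raw '/readyz?verbose'

Both should return "ok" for a healthy v1.30.3 control plane.)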
	I0805 11:29:22.635794  392242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:29:22.644836  392242 system_pods.go:59] 18 kube-system pods found
	I0805 11:29:22.644862  392242 system_pods.go:61] "coredns-7db6d8ff4d-s7xqd" [6dee3eaa-4dd1-4077-889c-712056552228] Running
	I0805 11:29:22.644870  392242 system_pods.go:61] "csi-hostpath-attacher-0" [2a900d97-2723-48f6-9ef3-6afbc793b8a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 11:29:22.644876  392242 system_pods.go:61] "csi-hostpath-resizer-0" [ff440e79-141d-4812-bfe9-c7d044fb5399] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0805 11:29:22.644883  392242 system_pods.go:61] "csi-hostpathplugin-bcjcs" [14fac8ba-adca-400e-bfb8-6320103d3061] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 11:29:22.644888  392242 system_pods.go:61] "etcd-addons-624151" [f47c334a-333f-499a-987d-ce5af8753b5e] Running
	I0805 11:29:22.644892  392242 system_pods.go:61] "kube-apiserver-addons-624151" [d7cef200-3ea7-4e6a-90a3-0a6cdd323229] Running
	I0805 11:29:22.644895  392242 system_pods.go:61] "kube-controller-manager-addons-624151" [19bf1de0-8ca6-4c6f-b142-84e105adc647] Running
	I0805 11:29:22.644899  392242 system_pods.go:61] "kube-ingress-dns-minikube" [a48e697f-4786-4387-9ef7-f15a45091c80] Running
	I0805 11:29:22.644902  392242 system_pods.go:61] "kube-proxy-nbpvj" [65b10013-8b12-4e89-b735-91ae7c4b32f8] Running
	I0805 11:29:22.644906  392242 system_pods.go:61] "kube-scheduler-addons-624151" [0d5635e3-6d61-40c7-b101-8e3176b4bb01] Running
	I0805 11:29:22.644911  392242 system_pods.go:61] "metrics-server-c59844bb4-f96nq" [7b3be79e-f92b-4158-8829-8fc50c6ebbd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 11:29:22.644918  392242 system_pods.go:61] "nvidia-device-plugin-daemonset-kgtjf" [bb17bf33-643c-4417-8bb1-1814162e0e18] Running
	I0805 11:29:22.644924  392242 system_pods.go:61] "registry-698f998955-kbn7c" [825a2f6e-bea8-4451-bc76-8ab82bd3e8f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 11:29:22.644931  392242 system_pods.go:61] "registry-proxy-6z85d" [f926e212-9d55-48fa-8149-0c86aaff8647] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 11:29:22.644940  392242 system_pods.go:61] "snapshot-controller-745499f584-nft7w" [fd109bf8-f9d0-479f-92af-d7ecbc0b4975] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.644946  392242 system_pods.go:61] "snapshot-controller-745499f584-szg99" [4754f0ca-4286-41b7-ab92-ce41eaf84ae6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.644950  392242 system_pods.go:61] "storage-provisioner" [3bfbac9c-232e-4b87-bd62-216bc17fad0e] Running
	I0805 11:29:22.644956  392242 system_pods.go:61] "tiller-deploy-6677d64bcd-g6dj9" [b48dc3b9-5ca0-4b5c-a47b-ed3b9a318ea5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 11:29:22.644963  392242 system_pods.go:74] duration metric: took 9.163687ms to wait for pod list to return data ...
	I0805 11:29:22.644973  392242 default_sa.go:34] waiting for default service account to be created ...
	I0805 11:29:22.647017  392242 default_sa.go:45] found service account: "default"
	I0805 11:29:22.647033  392242 default_sa.go:55] duration metric: took 2.053393ms for default service account to be created ...
	I0805 11:29:22.647040  392242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 11:29:22.655474  392242 system_pods.go:86] 18 kube-system pods found
	I0805 11:29:22.655497  392242 system_pods.go:89] "coredns-7db6d8ff4d-s7xqd" [6dee3eaa-4dd1-4077-889c-712056552228] Running
	I0805 11:29:22.655505  392242 system_pods.go:89] "csi-hostpath-attacher-0" [2a900d97-2723-48f6-9ef3-6afbc793b8a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 11:29:22.655514  392242 system_pods.go:89] "csi-hostpath-resizer-0" [ff440e79-141d-4812-bfe9-c7d044fb5399] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0805 11:29:22.655522  392242 system_pods.go:89] "csi-hostpathplugin-bcjcs" [14fac8ba-adca-400e-bfb8-6320103d3061] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 11:29:22.655527  392242 system_pods.go:89] "etcd-addons-624151" [f47c334a-333f-499a-987d-ce5af8753b5e] Running
	I0805 11:29:22.655532  392242 system_pods.go:89] "kube-apiserver-addons-624151" [d7cef200-3ea7-4e6a-90a3-0a6cdd323229] Running
	I0805 11:29:22.655536  392242 system_pods.go:89] "kube-controller-manager-addons-624151" [19bf1de0-8ca6-4c6f-b142-84e105adc647] Running
	I0805 11:29:22.655541  392242 system_pods.go:89] "kube-ingress-dns-minikube" [a48e697f-4786-4387-9ef7-f15a45091c80] Running
	I0805 11:29:22.655545  392242 system_pods.go:89] "kube-proxy-nbpvj" [65b10013-8b12-4e89-b735-91ae7c4b32f8] Running
	I0805 11:29:22.655551  392242 system_pods.go:89] "kube-scheduler-addons-624151" [0d5635e3-6d61-40c7-b101-8e3176b4bb01] Running
	I0805 11:29:22.655560  392242 system_pods.go:89] "metrics-server-c59844bb4-f96nq" [7b3be79e-f92b-4158-8829-8fc50c6ebbd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 11:29:22.655564  392242 system_pods.go:89] "nvidia-device-plugin-daemonset-kgtjf" [bb17bf33-643c-4417-8bb1-1814162e0e18] Running
	I0805 11:29:22.655572  392242 system_pods.go:89] "registry-698f998955-kbn7c" [825a2f6e-bea8-4451-bc76-8ab82bd3e8f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 11:29:22.655577  392242 system_pods.go:89] "registry-proxy-6z85d" [f926e212-9d55-48fa-8149-0c86aaff8647] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 11:29:22.655587  392242 system_pods.go:89] "snapshot-controller-745499f584-nft7w" [fd109bf8-f9d0-479f-92af-d7ecbc0b4975] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.655595  392242 system_pods.go:89] "snapshot-controller-745499f584-szg99" [4754f0ca-4286-41b7-ab92-ce41eaf84ae6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.655599  392242 system_pods.go:89] "storage-provisioner" [3bfbac9c-232e-4b87-bd62-216bc17fad0e] Running
	I0805 11:29:22.655607  392242 system_pods.go:89] "tiller-deploy-6677d64bcd-g6dj9" [b48dc3b9-5ca0-4b5c-a47b-ed3b9a318ea5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 11:29:22.655613  392242 system_pods.go:126] duration metric: took 8.567274ms to wait for k8s-apps to be running ...
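(Several addon pods in the listing above are still Pending with ContainersNotReady. To inspect why, using names taken from that listing, something like the following would show the pending containers and recent scheduler/kubelet events:

    kubectl --context addons-624151 -n kube-system describe pod metrics-server-c59844bb4-f96nq
    kubectl --context addons-624151 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20

This is a diagnostic sketch only; the k8s-apps check here only requires the core system pods to be Running, which they are.)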
	I0805 11:29:22.655629  392242 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 11:29:22.655675  392242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:29:22.671442  392242 system_svc.go:56] duration metric: took 15.803966ms WaitForService to wait for kubelet
	I0805 11:29:22.671471  392242 kubeadm.go:582] duration metric: took 29.398338375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:29:22.671492  392242 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:29:22.674666  392242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:29:22.674693  392242 node_conditions.go:123] node cpu capacity is 2
	I0805 11:29:22.674705  392242 node_conditions.go:105] duration metric: took 3.207964ms to run NodePressure ...
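(The NodePressure check reads the node's reported capacity; a manual equivalent, assuming the node name from this profile, is:

    kubectl --context addons-624151 get node addons-624151 -o jsonpath='{.status.capacity}'
    kubectl --context addons-624151 describe node addons-624151

which should report the same 17734596Ki ephemeral storage and 2 CPUs logged above.)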
	I0805 11:29:22.674717  392242 start.go:241] waiting for startup goroutines ...
	I0805 11:29:22.856263  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:22.885118  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:23.122316  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:23.123798  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:23.357767  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:23.384936  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:23.623568  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:23.625298  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:23.856549  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:23.885037  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:24.121440  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:24.124160  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:24.356180  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:24.384937  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:25.095250  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:25.096628  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:25.098590  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:25.100300  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:25.122180  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:25.124424  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:25.356216  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:25.384890  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:25.622571  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:25.624756  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:25.857101  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:25.885600  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:26.123076  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:26.125246  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:26.356104  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:26.385220  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:26.621672  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:26.624280  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:26.856357  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:26.884817  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:27.122228  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:27.123832  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:27.355221  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:27.390133  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:27.621276  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:27.625340  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:27.856513  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:27.884914  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:28.122533  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:28.125108  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:28.357666  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:28.384101  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:28.622364  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:28.623923  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:28.855842  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:28.884755  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:29.123174  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:29.124439  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:29.362642  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:29.385260  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:29.623961  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:29.627641  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:29.857411  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:29.885201  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:30.122809  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:30.125113  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:30.356135  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:30.385231  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:30.621792  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:30.624999  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:30.856688  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:30.884241  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:31.122104  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:31.124984  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:31.356400  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:31.385521  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:31.622253  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:31.623404  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:31.855950  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:31.884279  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:32.125311  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:32.126562  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:32.357049  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:32.384338  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:32.889258  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:32.891080  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:32.895056  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:32.895996  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:33.124216  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:33.124573  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:33.356053  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:33.384477  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:33.622122  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:33.623706  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:33.858896  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:33.886533  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:34.122430  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:34.123857  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:34.357212  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:34.384108  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:34.623909  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:34.624363  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:34.857345  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:34.885375  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:35.121976  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:35.125256  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:35.355889  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:35.384539  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:35.623342  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:35.625106  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:35.856447  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:35.886289  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:36.121721  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:36.124197  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:36.355871  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:36.384423  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:36.622951  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:36.625086  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:36.855474  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:36.885608  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:37.121883  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:37.124140  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:37.356106  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:37.384331  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:37.620762  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:37.623258  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:37.856305  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:37.884560  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:38.121752  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:38.124610  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:38.357451  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:38.384766  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:38.621809  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:38.624936  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:38.855327  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:38.885055  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:39.121864  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:39.125025  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:39.356489  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:39.385083  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:39.622308  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:39.624786  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:39.855027  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:39.884042  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:40.121210  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:40.124944  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:40.358169  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:40.386479  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:40.624354  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:40.624478  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:40.858583  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:40.884861  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:41.122067  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:41.125065  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:41.355893  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:41.385073  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:41.621340  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:41.623996  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:41.858831  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:41.888443  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:42.121923  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:42.124835  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:42.357172  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:42.384306  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:42.622332  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:42.624915  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:42.855994  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:42.885125  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:43.122661  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:43.125070  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:43.355523  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:43.385282  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:43.621430  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:43.624628  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:43.856434  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:43.886215  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:44.121666  392242 kapi.go:107] duration metric: took 42.004863243s to wait for kubernetes.io/minikube-addons=registry ...
	I0805 11:29:44.124108  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:44.355872  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:44.384579  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:44.624150  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:44.855976  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:44.884996  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:45.125286  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:45.356496  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:45.385257  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:45.624446  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:45.856504  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:45.885553  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:46.124854  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:46.357088  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:46.385348  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:46.624429  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:46.856287  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:46.884335  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:47.124695  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:47.357652  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:47.384647  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:47.623930  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:47.855805  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:47.885304  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:48.124493  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:48.356858  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:48.385025  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:48.626305  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:48.859226  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:48.885358  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:49.124525  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:49.356179  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:49.384715  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:50.080392  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:50.081072  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:50.081387  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:50.129186  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:50.356113  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:50.384521  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:50.626737  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:50.861620  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:50.885098  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:51.124185  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:51.356637  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:51.384285  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:51.624183  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:51.856460  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:51.884725  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:52.123708  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:52.356737  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:52.384632  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:52.624289  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:52.857576  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:52.885439  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:53.123843  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:53.356488  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:53.384729  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:53.624298  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:53.856557  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:53.886346  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:54.124598  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:54.357428  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:54.384696  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:54.623908  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:54.855693  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:54.885457  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:55.136039  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:55.358111  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:55.384546  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:55.624353  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:55.857544  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:55.886717  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:56.127388  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:56.355905  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:56.385341  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:56.624261  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:56.856501  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:56.889564  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:57.123220  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:57.356162  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:57.384686  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:57.623479  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:57.857683  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:57.884618  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:58.123206  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:58.355921  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:58.384448  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:58.625053  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:58.856097  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:58.885758  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:59.123581  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:59.357011  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:59.385142  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:59.624300  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:59.856838  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:59.885463  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:00.124600  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:00.355604  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:00.392602  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:00.624931  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:00.857249  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:00.884297  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:01.124223  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:01.356509  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:01.385046  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:01.625917  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:01.855760  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:01.884098  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:02.130252  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:02.356266  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:02.384461  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:02.626641  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:02.859094  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:02.887856  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:03.124566  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:03.355211  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:03.384806  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:03.625147  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:03.855803  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:03.884107  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:04.126328  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:04.361618  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:04.388864  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:04.624162  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:04.856352  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:04.884563  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:05.126913  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:05.356823  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:05.384978  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:05.624535  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:05.857619  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:05.885480  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:06.128626  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:06.357289  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:06.384146  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:06.624498  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:06.856645  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:06.884503  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:07.124695  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:07.626477  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:07.627950  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:07.629189  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:07.859854  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:07.886994  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:08.130587  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:08.356930  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:08.392798  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:08.625098  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:08.867376  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:08.886332  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:09.124820  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:09.359000  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:09.386725  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:09.623804  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:09.856249  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:09.884619  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:10.123581  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:10.357777  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:10.385105  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:10.623787  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:10.856635  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:10.886102  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:11.124326  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:11.356075  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:11.385632  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:11.637153  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:11.855995  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:11.884458  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:12.124139  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:12.356952  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:12.401403  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:12.625332  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:12.858980  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:12.885311  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:13.124566  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:13.356510  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:13.384996  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:13.624551  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:14.041621  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:14.042809  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:14.123987  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:14.355651  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:14.384402  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:14.624179  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:14.855502  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:14.885175  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:15.124687  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:15.356137  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:15.385035  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:15.627146  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:15.856372  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:15.885436  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:16.124934  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:16.356001  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:16.384893  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:16.624579  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:16.868866  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:16.884032  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:17.123999  392242 kapi.go:107] duration metric: took 1m15.004537622s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0805 11:30:17.359205  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:17.384813  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:17.856438  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:17.884152  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:18.357127  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:18.384525  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:18.856132  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:18.884722  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:19.356552  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:19.386136  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:19.859469  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:19.884979  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:20.357822  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:20.384777  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:20.857192  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:20.884936  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:21.358242  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:21.384653  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:21.857808  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:21.885176  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:22.358778  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:22.385912  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:22.856861  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:22.888993  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:23.358307  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:23.384940  392242 kapi.go:107] duration metric: took 1m18.50415092s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0805 11:30:23.386817  392242 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-624151 cluster.
	I0805 11:30:23.388173  392242 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0805 11:30:23.389479  392242 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0805 11:30:23.857022  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:24.544940  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:24.855422  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:25.357268  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:25.856640  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:26.359105  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:26.856358  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:27.356018  392242 kapi.go:107] duration metric: took 1m24.005708679s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0805 11:30:27.357712  392242 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, default-storageclass, helm-tiller, nvidia-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0805 11:30:27.359214  392242 addons.go:510] duration metric: took 1m34.086031545s for enable addons: enabled=[cloud-spanner storage-provisioner default-storageclass helm-tiller nvidia-device-plugin inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0805 11:30:27.359251  392242 start.go:246] waiting for cluster config update ...
	I0805 11:30:27.359268  392242 start.go:255] writing updated cluster config ...
	I0805 11:30:27.359539  392242 ssh_runner.go:195] Run: rm -f paused
	I0805 11:30:27.412705  392242 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 11:30:27.414651  392242 out.go:177] * Done! kubectl is now configured to use "addons-624151" cluster and "default" namespace by default
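	The repeated kapi.go:96 lines above are minikube polling the cluster until every pod matching an addon's label selector reports Running, at which point the kapi.go:107 "duration metric" line is logged for that selector. As a rough illustration only (this is not minikube's actual kapi.go source; the package name kapiwait and the function waitForPodsRunning are invented for this sketch), a label-selector wait loop of this shape can be written with client-go:

	package kapiwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls the API server until every pod matching selector in
	// namespace reports phase Running, or until timeout elapses.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, namespace, selector string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
						// Roughly mirrors the "waiting for pod ... current state" lines above.
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if allRunning {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods matching %q", selector)
			}
			time.Sleep(interval)
		}
	}

	Each selector seen above (for example kubernetes.io/minikube-addons=registry or app.kubernetes.io/name=ingress-nginx) is waited on independently, which is why the selectors keep interleaving in the log until their individual duration metrics appear.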
	
	
	==> CRI-O <==
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.532356781Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722857334321062055,StartedAt:1722857334446136874,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbpvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b10013-8b12-4e89-b735-91ae7c4b32f8,},Annotations:map[string]string{io.kubernetes.container.hash: 92541790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/65b10013-8b12-4e89-b735-91ae7c4b32f8/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/65b10013-8b12-4e89-b735-91ae7c4b32f8/containers/kube-proxy/e59eb26b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/65b10013-8b12-4e89-b735-91ae7c4b32f8/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/65b10013-8b12-4e89-b735-91ae7c4b32f8/volumes/kubernetes.io~projected/kube-api-access-2kmf2,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-nbpvj_65b10013-8b12-4e89-b735-91ae7c4b32f8/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6f69ebff-7ad8-4e52-a557-d1759fbe0ec5 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.532742807Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a90d5581-85dc-4501-bb71-e7b9de98d1f8 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.532893421Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722857314352045947,StartedAt:1722857314454447993,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e410e9957c9bb7ef05423de94b75d113,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e410e9957c9bb7ef05423de94b75d113/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e410e9957c9bb7ef05423de94b75d113/containers/kube-scheduler/8c83df92,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-624151_e410e9957c9bb7ef05423de94b75d113/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a90d5581-85dc-4501-bb71-e7b9de98d1f8 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.533244727Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,Verbose:false,}" file="otel-collector/interceptors.go:62" id=27343b28-88c9-46d0-84e3-7ebe7ddbf4d4 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.533362195Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722857314340278031,StartedAt:1722857314432848254,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f0da1f77e0045af5e54ecd4e121311e,},Annotations:map[string]string{io.kubernetes.container.hash: a0ed5d74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3f0da1f77e0045af5e54ecd4e121311e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3f0da1f77e0045af5e54ecd4e121311e/containers/etcd/665c29ed,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-624151_3f0da1f77e0045af5e54ecd4e121311e/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=27343b28-88c9-46d0-84e3-7ebe7ddbf4d4 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.533962955Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7288ca9a-1aeb-4b79-9396-5875427036cf name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.534989083Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722857314207532588,StartedAt:1722857314305967933,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2128468ad8768be81ae3f787fbd1b4d,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e2128468ad8768be81ae3f787fbd1b4d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e2128468ad8768be81ae3f787fbd1b4d/containers/kube-apiserver/a9607a1a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/
var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-624151_e2128468ad8768be81ae3f787fbd1b4d/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7288ca9a-1aeb-4b79-9396-5875427036cf name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.538126242Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7fe219dd-cb61-4577-92d0-52671c1b399e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.538414586Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1722857314174147604,StartedAt:1722857314281877724,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 600ddff116cca39e51d6b17e354e744e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/600ddff116cca39e51d6b17e354e744e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/600ddff116cca39e51d6b17e354e744e/containers/kube-controller-manager/65c98ed9,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMapp
ings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-624151_600ddff116cca39e51d6b17e354e744e/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,Hugepag
eLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7fe219dd-cb61-4577-92d0-52671c1b399e name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.561011041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad11edb5-9db0-49b9-801a-f3ba1fc41e81 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.561395381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad11edb5-9db0-49b9-801a-f3ba1fc41e81 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.563142360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a30b57f5-fd07-4795-9fc1-0cf5d024d24e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.564534919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722857649564512652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a30b57f5-fd07-4795-9fc1-0cf5d024d24e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.565208932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4eae6c1d-cd3f-4fae-96a6-df27015e04ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.565421844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4eae6c1d-cd3f-4fae-96a6-df27015e04ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.566127824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29967c1a34fdd70f38938c81105eee2de604acad5c342ceb0528aea72aeaa6b2,PodSandboxId:5af76ddc44262f44f77a78f8266db7a3f6a4a8eb3cf17ed5a253203e5bbf0f3d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722857642178722530,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-766vd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a9c74668-33fb-4764-a90a-a62b6278412b,},Annotations:map[string]string{io.kubernetes.container.hash: 4bc11893,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db49fd8f431c9e97fddb6f55e5323780f859599ab01f8c5e5a140995076f8112,PodSandboxId:92acbbfa3733bd8790a8f1df24d4db773591b117bf036aadd7059d0063828729,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722857502510523235,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c086e51-e9aa-47d4-b5da-7196cbb25a28,},Annotations:map[string]string{io.kubernet
es.container.hash: 57261cd3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f269744d60f0b2b6497481c781c5c68ef2a589cb1ec32595a1a3a2ded79fead3,PodSandboxId:7e71318d6d00288614e6c56f44eda2a01ffecf1485b418000f2289fe3ac1f81c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722857431226184122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 615be7ce-86e2-476f-8
1a0-9c656f5b27ad,},Annotations:map[string]string{io.kubernetes.container.hash: d45a858f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea47c54585f354161665d826e48ab1db4a14e25572006bd88ee33514ea425646,PodSandboxId:5aace54d9aeeb678b63cc79f25f3c7fa7685a3ad0439bbc580c5074f2972a27c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722857401986495453,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lqhwv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ac13ed42-f259-4e3c-af23-d3dd00413b01,},Anno
tations:map[string]string{io.kubernetes.container.hash: f413dbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f86482c227aef4e8356044724b4dece6cafd222483cbf308f0b67cd1893de58,PodSandboxId:9c671911a609ab58d0cc31298ff573c65ee82a5af26195c4bea7df7d85f45713,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722857401029018674,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zwjl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b60965
6d-850a-459b-9f53-d5183d9245a3,},Annotations:map[string]string{io.kubernetes.container.hash: b1f8c956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21dd489749ff06cb7a6f20a0f0b8aec17868a25adc1ae767f7f5f3843c78fbf,PodSandboxId:18a4f31a0fb9b487e490d8a6fcd523237e7d26d378a924036a8d02270bcb219b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722857367357639978,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f96nq,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 7b3be79e-f92b-4158-8829-8fc50c6ebbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 1bddcb70,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c,PodSandboxId:aef1255e7abe477800eda44354377e46c87feec3a47ab6320c1d5e22f71c01b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722857338775026247,Labels:map[string]string{io.kubernetes.container.name: storag
e-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bfbac9c-232e-4b87-bd62-216bc17fad0e,},Annotations:map[string]string{io.kubernetes.container.hash: 72dfffbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028,PodSandboxId:2202578a252573a14cc49f72422bd2c2e36ae6488cf22805191908c9f0dd29ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722857336396783354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name
: coredns-7db6d8ff4d-s7xqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dee3eaa-4dd1-4077-889c-712056552228,},Annotations:map[string]string{io.kubernetes.container.hash: 748f2ff8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428,PodSandboxId:e948c450eec0ec2bf0c69c35be019b5a77e99b275771cfe3f32e96355fd1e5a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722857334011999123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbpvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b10013-8b12-4e89-b735-91ae7c4b32f8,},Annotations:map[string]string{io.kubernetes.container.hash: 92541790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,PodSandboxId:8160c730de833c3f57575a025962de757bfbc94cfc9443b91505de1bcdedfadb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722857314210285013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e410e9957c9bb7ef05423de94b75d113,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,PodSandboxId:4eff185c0adebdaed2d00dc578176bfab416d976f3301d55cd6f3725d5c2f82d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722857314201454698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f0da1f77e0045af5e54ecd4e121311e,},Annotations:map[string]string{io.kubernetes.container.hash: a0ed5d74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,PodSandboxId:0daf17a00555e0f3f4f4f026d6a90a88e3925ba06b19fbe913cf57eef9b92a8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:
1722857314123349501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2128468ad8768be81ae3f787fbd1b4d,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,PodSandboxId:2a6d39055991a2056ae7148b498e25a2d06df7d6c41c5b6c99a8305ffbf2aa0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17228573
14094956467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 600ddff116cca39e51d6b17e354e744e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4eae6c1d-cd3f-4fae-96a6-df27015e04ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.600377912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b8c9ac2-383e-4797-8c96-24e12aa3d856 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.600478479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b8c9ac2-383e-4797-8c96-24e12aa3d856 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.602105286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1e684e0-fd52-46b7-bf56-ff8de672acaa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.603281052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722857649603257325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1e684e0-fd52-46b7-bf56-ff8de672acaa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.604205416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54a62e48-eda5-419d-8470-f77e1ef48315 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.604288231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54a62e48-eda5-419d-8470-f77e1ef48315 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.604612137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29967c1a34fdd70f38938c81105eee2de604acad5c342ceb0528aea72aeaa6b2,PodSandboxId:5af76ddc44262f44f77a78f8266db7a3f6a4a8eb3cf17ed5a253203e5bbf0f3d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722857642178722530,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-766vd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a9c74668-33fb-4764-a90a-a62b6278412b,},Annotations:map[string]string{io.kubernetes.container.hash: 4bc11893,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db49fd8f431c9e97fddb6f55e5323780f859599ab01f8c5e5a140995076f8112,PodSandboxId:92acbbfa3733bd8790a8f1df24d4db773591b117bf036aadd7059d0063828729,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722857502510523235,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c086e51-e9aa-47d4-b5da-7196cbb25a28,},Annotations:map[string]string{io.kubernet
es.container.hash: 57261cd3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f269744d60f0b2b6497481c781c5c68ef2a589cb1ec32595a1a3a2ded79fead3,PodSandboxId:7e71318d6d00288614e6c56f44eda2a01ffecf1485b418000f2289fe3ac1f81c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722857431226184122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 615be7ce-86e2-476f-8
1a0-9c656f5b27ad,},Annotations:map[string]string{io.kubernetes.container.hash: d45a858f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea47c54585f354161665d826e48ab1db4a14e25572006bd88ee33514ea425646,PodSandboxId:5aace54d9aeeb678b63cc79f25f3c7fa7685a3ad0439bbc580c5074f2972a27c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722857401986495453,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lqhwv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ac13ed42-f259-4e3c-af23-d3dd00413b01,},Anno
tations:map[string]string{io.kubernetes.container.hash: f413dbf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f86482c227aef4e8356044724b4dece6cafd222483cbf308f0b67cd1893de58,PodSandboxId:9c671911a609ab58d0cc31298ff573c65ee82a5af26195c4bea7df7d85f45713,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722857401029018674,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zwjl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b60965
6d-850a-459b-9f53-d5183d9245a3,},Annotations:map[string]string{io.kubernetes.container.hash: b1f8c956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21dd489749ff06cb7a6f20a0f0b8aec17868a25adc1ae767f7f5f3843c78fbf,PodSandboxId:18a4f31a0fb9b487e490d8a6fcd523237e7d26d378a924036a8d02270bcb219b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722857367357639978,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f96nq,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 7b3be79e-f92b-4158-8829-8fc50c6ebbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 1bddcb70,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c,PodSandboxId:aef1255e7abe477800eda44354377e46c87feec3a47ab6320c1d5e22f71c01b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722857338775026247,Labels:map[string]string{io.kubernetes.container.name: storag
e-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bfbac9c-232e-4b87-bd62-216bc17fad0e,},Annotations:map[string]string{io.kubernetes.container.hash: 72dfffbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028,PodSandboxId:2202578a252573a14cc49f72422bd2c2e36ae6488cf22805191908c9f0dd29ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722857336396783354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name
: coredns-7db6d8ff4d-s7xqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dee3eaa-4dd1-4077-889c-712056552228,},Annotations:map[string]string{io.kubernetes.container.hash: 748f2ff8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428,PodSandboxId:e948c450eec0ec2bf0c69c35be019b5a77e99b275771cfe3f32e96355fd1e5a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722857334011999123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbpvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b10013-8b12-4e89-b735-91ae7c4b32f8,},Annotations:map[string]string{io.kubernetes.container.hash: 92541790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,PodSandboxId:8160c730de833c3f57575a025962de757bfbc94cfc9443b91505de1bcdedfadb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722857314210285013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e410e9957c9bb7ef05423de94b75d113,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,PodSandboxId:4eff185c0adebdaed2d00dc578176bfab416d976f3301d55cd6f3725d5c2f82d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722857314201454698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f0da1f77e0045af5e54ecd4e121311e,},Annotations:map[string]string{io.kubernetes.container.hash: a0ed5d74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,PodSandboxId:0daf17a00555e0f3f4f4f026d6a90a88e3925ba06b19fbe913cf57eef9b92a8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:
1722857314123349501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2128468ad8768be81ae3f787fbd1b4d,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,PodSandboxId:2a6d39055991a2056ae7148b498e25a2d06df7d6c41c5b6c99a8305ffbf2aa0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17228573
14094956467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 600ddff116cca39e51d6b17e354e744e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54a62e48-eda5-419d-8470-f77e1ef48315 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.611485335Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=de0886ee-0741-4677-b275-e26c4003f0f7 name=/runtime.v1.RuntimeService/Status
	Aug 05 11:34:09 addons-624151 crio[682]: time="2024-08-05 11:34:09.611563372Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=de0886ee-0741-4677-b275-e26c4003f0f7 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29967c1a34fdd       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   5af76ddc44262       hello-world-app-6778b5fc9f-766vd
	db49fd8f431c9       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   92acbbfa3733b       nginx
	f269744d60f0b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   7e71318d6d002       busybox
	ea47c54585f35       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     1                   5aace54d9aeeb       ingress-nginx-admission-patch-lqhwv
	3f86482c227ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   9c671911a609a       ingress-nginx-admission-create-zwjl5
	a21dd489749ff       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   18a4f31a0fb9b       metrics-server-c59844bb4-f96nq
	c11137f2baffb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   aef1255e7abe4       storage-provisioner
	93a73ab29d9db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   2202578a25257       coredns-7db6d8ff4d-s7xqd
	bf8488bfd154a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             5 minutes ago       Running             kube-proxy                0                   e948c450eec0e       kube-proxy-nbpvj
	1fbda93b1b1b6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             5 minutes ago       Running             kube-scheduler            0                   8160c730de833       kube-scheduler-addons-624151
	ea77cae92000d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   4eff185c0adeb       etcd-addons-624151
	c7da48507d231       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             5 minutes ago       Running             kube-apiserver            0                   0daf17a00555e       kube-apiserver-addons-624151
	7f11721040ddc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             5 minutes ago       Running             kube-controller-manager   0                   2a6d39055991a       kube-controller-manager-addons-624151
	
	
	==> coredns [93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028] <==
	[INFO] 10.244.0.7:40176 - 13185 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000425464s
	[INFO] 10.244.0.7:34591 - 49047 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095625s
	[INFO] 10.244.0.7:34591 - 3945 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055104s
	[INFO] 10.244.0.7:49726 - 49481 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076157s
	[INFO] 10.244.0.7:49726 - 40023 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00097384s
	[INFO] 10.244.0.7:51199 - 14602 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000257994s
	[INFO] 10.244.0.7:51199 - 31496 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202688s
	[INFO] 10.244.0.7:59351 - 13513 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000057116s
	[INFO] 10.244.0.7:59351 - 63683 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000193861s
	[INFO] 10.244.0.7:52073 - 10001 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033086s
	[INFO] 10.244.0.7:52073 - 28951 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170336s
	[INFO] 10.244.0.7:37647 - 22512 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056482s
	[INFO] 10.244.0.7:37647 - 23794 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00002733s
	[INFO] 10.244.0.7:32997 - 30507 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080099s
	[INFO] 10.244.0.7:32997 - 46121 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117637s
	[INFO] 10.244.0.22:56323 - 29064 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00075055s
	[INFO] 10.244.0.22:60511 - 4358 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151184s
	[INFO] 10.244.0.22:43808 - 51766 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000480222s
	[INFO] 10.244.0.22:42209 - 33096 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000394441s
	[INFO] 10.244.0.22:47722 - 40890 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089353s
	[INFO] 10.244.0.22:58614 - 9577 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063844s
	[INFO] 10.244.0.22:45815 - 62497 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00086798s
	[INFO] 10.244.0.22:58134 - 62897 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000967947s
	[INFO] 10.244.0.24:52178 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00040037s
	[INFO] 10.244.0.24:48746 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145385s
	
	
	==> describe nodes <==
	Name:               addons-624151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-624151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=addons-624151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T11_28_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-624151
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:28:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-624151
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:34:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:32:44 +0000   Mon, 05 Aug 2024 11:28:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:32:44 +0000   Mon, 05 Aug 2024 11:28:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:32:44 +0000   Mon, 05 Aug 2024 11:28:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:32:44 +0000   Mon, 05 Aug 2024 11:28:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    addons-624151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 300940b5f96141d395f2b88a58f331cd
	  System UUID:                300940b5-f961-41d3-95f2-b88a58f331cd
	  Boot ID:                    e20994b0-235e-42ff-8124-7b64eb456736
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  default                     hello-world-app-6778b5fc9f-766vd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-7db6d8ff4d-s7xqd                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m16s
	  kube-system                 etcd-addons-624151                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m30s
	  kube-system                 kube-apiserver-addons-624151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-addons-624151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-nbpvj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-addons-624151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 metrics-server-c59844bb4-f96nq           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node addons-624151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node addons-624151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node addons-624151 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m30s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m30s                  kubelet          Node addons-624151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s                  kubelet          Node addons-624151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s                  kubelet          Node addons-624151 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m29s                  kubelet          Node addons-624151 status is now: NodeReady
	  Normal  RegisteredNode           5m17s                  node-controller  Node addons-624151 event: Registered Node addons-624151 in Controller
	
	
	==> dmesg <==
	[  +5.027228] kauditd_printk_skb: 110 callbacks suppressed
	[Aug 5 11:29] kauditd_printk_skb: 131 callbacks suppressed
	[  +6.156128] kauditd_printk_skb: 84 callbacks suppressed
	[ +16.885125] kauditd_printk_skb: 4 callbacks suppressed
	[ +16.260495] kauditd_printk_skb: 4 callbacks suppressed
	[Aug 5 11:30] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.166187] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.563756] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.069184] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.115817] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.284330] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.978967] kauditd_printk_skb: 2 callbacks suppressed
	[Aug 5 11:31] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.056795] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.764342] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.125022] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.102303] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.052259] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.769727] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.936499] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.830297] kauditd_printk_skb: 5 callbacks suppressed
	[Aug 5 11:32] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.172160] kauditd_printk_skb: 43 callbacks suppressed
	[Aug 5 11:33] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 5 11:34] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9] <==
	{"level":"warn","ts":"2024-08-05T11:30:14.021662Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:30:13.690306Z","time spent":"331.295484ms","remote":"127.0.0.1:44704","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1136 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-05T11:30:14.027486Z","caller":"traceutil/trace.go:171","msg":"trace[251119462] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"184.168218ms","start":"2024-08-05T11:30:13.843305Z","end":"2024-08-05T11:30:14.027473Z","steps":["trace[251119462] 'read index received'  (duration: 178.681057ms)","trace[251119462] 'applied index is now lower than readState.Index'  (duration: 5.48652ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T11:30:14.027916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.554036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-08-05T11:30:14.02813Z","caller":"traceutil/trace.go:171","msg":"trace[484340735] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1139; }","duration":"184.85522ms","start":"2024-08-05T11:30:13.843264Z","end":"2024-08-05T11:30:14.028119Z","steps":["trace[484340735] 'agreement among raft nodes before linearized reading'  (duration: 184.436694ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:30:14.028385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.898977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11790"}
	{"level":"info","ts":"2024-08-05T11:30:14.02847Z","caller":"traceutil/trace.go:171","msg":"trace[1892054956] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1139; }","duration":"155.00604ms","start":"2024-08-05T11:30:13.873457Z","end":"2024-08-05T11:30:14.028463Z","steps":["trace[1892054956] 'agreement among raft nodes before linearized reading'  (duration: 154.864009ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:30:24.530504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.809459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-08-05T11:30:24.530561Z","caller":"traceutil/trace.go:171","msg":"trace[1499690658] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1200; }","duration":"187.891672ms","start":"2024-08-05T11:30:24.342649Z","end":"2024-08-05T11:30:24.530541Z","steps":["trace[1499690658] 'range keys from in-memory index tree'  (duration: 187.545072ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:30:56.075994Z","caller":"traceutil/trace.go:171","msg":"trace[1931072354] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"109.916144ms","start":"2024-08-05T11:30:55.966048Z","end":"2024-08-05T11:30:56.075965Z","steps":["trace[1931072354] 'process raft request'  (duration: 109.547607ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:22.93114Z","caller":"traceutil/trace.go:171","msg":"trace[2066504602] transaction","detail":"{read_only:false; response_revision:1579; number_of_response:1; }","duration":"169.137988ms","start":"2024-08-05T11:31:22.761969Z","end":"2024-08-05T11:31:22.931107Z","steps":["trace[2066504602] 'process raft request'  (duration: 169.092076ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:22.931371Z","caller":"traceutil/trace.go:171","msg":"trace[261962480] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"352.43285ms","start":"2024-08-05T11:31:22.578926Z","end":"2024-08-05T11:31:22.931359Z","steps":["trace[261962480] 'process raft request'  (duration: 349.399033ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:22.931433Z","caller":"traceutil/trace.go:171","msg":"trace[98690444] linearizableReadLoop","detail":"{readStateIndex:1630; appliedIndex:1629; }","duration":"349.837986ms","start":"2024-08-05T11:31:22.581586Z","end":"2024-08-05T11:31:22.931424Z","steps":["trace[98690444] 'read index received'  (duration: 346.745538ms)","trace[98690444] 'applied index is now lower than readState.Index'  (duration: 3.091824ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T11:31:22.931522Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:31:22.578909Z","time spent":"352.503222ms","remote":"127.0.0.1:44682","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1247,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/persistentvolumes/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92\" mod_revision:1512 > success:<request_put:<key:\"/registry/persistentvolumes/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92\" value_size:1171 >> failure:<request_range:<key:\"/registry/persistentvolumes/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92\" > >"}
	{"level":"warn","ts":"2024-08-05T11:31:22.931649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.05426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-05T11:31:22.931676Z","caller":"traceutil/trace.go:171","msg":"trace[1464305680] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1579; }","duration":"350.104553ms","start":"2024-08-05T11:31:22.581563Z","end":"2024-08-05T11:31:22.931668Z","steps":["trace[1464305680] 'agreement among raft nodes before linearized reading'  (duration: 350.019411ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:31:22.931696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:31:22.581553Z","time spent":"350.138747ms","remote":"127.0.0.1:44704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-08-05T11:31:22.931779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.096076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-08-05T11:31:22.931798Z","caller":"traceutil/trace.go:171","msg":"trace[1240337336] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1579; }","duration":"248.115772ms","start":"2024-08-05T11:31:22.683677Z","end":"2024-08-05T11:31:22.931792Z","steps":["trace[1240337336] 'agreement among raft nodes before linearized reading'  (duration: 248.057438ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:38.203176Z","caller":"traceutil/trace.go:171","msg":"trace[1974242065] transaction","detail":"{read_only:false; response_revision:1692; number_of_response:1; }","duration":"100.823026ms","start":"2024-08-05T11:31:38.102317Z","end":"2024-08-05T11:31:38.20314Z","steps":["trace[1974242065] 'process raft request'  (duration: 99.733861ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:32:12.616752Z","caller":"traceutil/trace.go:171","msg":"trace[1644210485] transaction","detail":"{read_only:false; response_revision:1888; number_of_response:1; }","duration":"203.426092ms","start":"2024-08-05T11:32:12.413299Z","end":"2024-08-05T11:32:12.616725Z","steps":["trace[1644210485] 'process raft request'  (duration: 196.097364ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:32:12.616966Z","caller":"traceutil/trace.go:171","msg":"trace[742688298] linearizableReadLoop","detail":"{readStateIndex:1957; appliedIndex:1956; }","duration":"174.866109ms","start":"2024-08-05T11:32:12.442081Z","end":"2024-08-05T11:32:12.616947Z","steps":["trace[742688298] 'read index received'  (duration: 167.324395ms)","trace[742688298] 'applied index is now lower than readState.Index'  (duration: 7.541049ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T11:32:12.617171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.048206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-health-monitor-controller-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T11:32:12.617231Z","caller":"traceutil/trace.go:171","msg":"trace[1285658805] range","detail":"{range_begin:/registry/clusterroles/external-health-monitor-controller-runner; range_end:; response_count:0; response_revision:1889; }","duration":"175.143769ms","start":"2024-08-05T11:32:12.44207Z","end":"2024-08-05T11:32:12.617214Z","steps":["trace[1285658805] 'agreement among raft nodes before linearized reading'  (duration: 175.03422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:32:12.617426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.995335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/addons-624151\" ","response":"range_response_count:1 size:865"}
	{"level":"info","ts":"2024-08-05T11:32:12.61747Z","caller":"traceutil/trace.go:171","msg":"trace[1814877083] range","detail":"{range_begin:/registry/csinodes/addons-624151; range_end:; response_count:1; response_revision:1889; }","duration":"138.043873ms","start":"2024-08-05T11:32:12.479419Z","end":"2024-08-05T11:32:12.617463Z","steps":["trace[1814877083] 'agreement among raft nodes before linearized reading'  (duration: 137.902448ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:34:09 up 6 min,  0 users,  load average: 0.41, 1.07, 0.59
	Linux addons-624151 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f] <==
	I0805 11:30:29.896792       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0805 11:30:37.885626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.142:8443->192.168.39.1:36500: use of closed network connection
	E0805 11:30:38.090005       1 conn.go:339] Error on socket receive: read tcp 192.168.39.142:8443->192.168.39.1:36524: use of closed network connection
	I0805 11:30:53.048469       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0805 11:30:54.102952       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0805 11:31:15.981679       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.108.152"}
	I0805 11:31:35.301754       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0805 11:31:35.520188       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.189.86"}
	E0805 11:31:38.992548       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0805 11:31:44.860237       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0805 11:32:15.189657       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.142:8443->10.244.0.32:34022: read: connection reset by peer
	I0805 11:32:18.170261       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.170330       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.188041       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.188205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.210235       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.210382       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.217266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.217391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.310209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.310333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0805 11:32:19.218507       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0805 11:32:19.310970       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0805 11:32:19.317046       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0805 11:33:59.215672       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.144.139"}
	
	
	==> kube-controller-manager [7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a] <==
	W0805 11:32:49.263455       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:32:49.263564       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:32:50.849100       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:32:50.849200       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:32:53.693255       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:32:53.693287       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:33:03.114594       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:33:03.114715       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:33:26.328584       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:33:26.328836       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:33:38.108494       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:33:38.108551       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:33:39.913437       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:33:39.913546       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:33:42.679025       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:33:42.679129       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 11:33:59.059073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="35.477344ms"
	I0805 11:33:59.082745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="23.535592ms"
	I0805 11:33:59.111427       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="28.317982ms"
	I0805 11:33:59.111640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="87.175µs"
	I0805 11:34:01.585816       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0805 11:34:01.590084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="3.482µs"
	I0805 11:34:01.594412       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0805 11:34:02.957778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="11.289976ms"
	I0805 11:34:02.957903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="80.611µs"
	
	
	==> kube-proxy [bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428] <==
	I0805 11:28:54.559168       1 server_linux.go:69] "Using iptables proxy"
	I0805 11:28:54.591219       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.142"]
	I0805 11:28:54.683065       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 11:28:54.683113       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 11:28:54.683129       1 server_linux.go:165] "Using iptables Proxier"
	I0805 11:28:54.687948       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 11:28:54.688195       1 server.go:872] "Version info" version="v1.30.3"
	I0805 11:28:54.688208       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:28:54.690631       1 config.go:319] "Starting node config controller"
	I0805 11:28:54.690641       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 11:28:54.691004       1 config.go:192] "Starting service config controller"
	I0805 11:28:54.691013       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 11:28:54.691028       1 config.go:101] "Starting endpoint slice config controller"
	I0805 11:28:54.691031       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 11:28:54.791113       1 shared_informer.go:320] Caches are synced for node config
	I0805 11:28:54.791157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 11:28:54.791180       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928] <==
	W0805 11:28:36.910932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:28:36.910966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:28:36.911063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 11:28:36.911092       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 11:28:36.911105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 11:28:36.911112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 11:28:36.911350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 11:28:36.911406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 11:28:37.729669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 11:28:37.729731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 11:28:37.771257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 11:28:37.771349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 11:28:37.806953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:28:37.807361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:28:37.935825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 11:28:37.936011       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 11:28:38.000396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 11:28:38.000576       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 11:28:38.096254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 11:28:38.096418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 11:28:38.096633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 11:28:38.096736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 11:28:38.156456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 11:28:38.156749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0805 11:28:38.489927       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 11:33:59 addons-624151 kubelet[1279]: I0805 11:33:59.069341    1279 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff440e79-141d-4812-bfe9-c7d044fb5399" containerName="csi-resizer"
	Aug 05 11:33:59 addons-624151 kubelet[1279]: I0805 11:33:59.069350    1279 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fac8ba-adca-400e-bfb8-6320103d3061" containerName="liveness-probe"
	Aug 05 11:33:59 addons-624151 kubelet[1279]: I0805 11:33:59.138935    1279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zvd2\" (UniqueName: \"kubernetes.io/projected/a9c74668-33fb-4764-a90a-a62b6278412b-kube-api-access-7zvd2\") pod \"hello-world-app-6778b5fc9f-766vd\" (UID: \"a9c74668-33fb-4764-a90a-a62b6278412b\") " pod="default/hello-world-app-6778b5fc9f-766vd"
	Aug 05 11:34:00 addons-624151 kubelet[1279]: I0805 11:34:00.246477    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6zb6\" (UniqueName: \"kubernetes.io/projected/a48e697f-4786-4387-9ef7-f15a45091c80-kube-api-access-s6zb6\") pod \"a48e697f-4786-4387-9ef7-f15a45091c80\" (UID: \"a48e697f-4786-4387-9ef7-f15a45091c80\") "
	Aug 05 11:34:00 addons-624151 kubelet[1279]: I0805 11:34:00.248664    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48e697f-4786-4387-9ef7-f15a45091c80-kube-api-access-s6zb6" (OuterVolumeSpecName: "kube-api-access-s6zb6") pod "a48e697f-4786-4387-9ef7-f15a45091c80" (UID: "a48e697f-4786-4387-9ef7-f15a45091c80"). InnerVolumeSpecName "kube-api-access-s6zb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 11:34:00 addons-624151 kubelet[1279]: I0805 11:34:00.347192    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s6zb6\" (UniqueName: \"kubernetes.io/projected/a48e697f-4786-4387-9ef7-f15a45091c80-kube-api-access-s6zb6\") on node \"addons-624151\" DevicePath \"\""
	Aug 05 11:34:00 addons-624151 kubelet[1279]: I0805 11:34:00.918795    1279 scope.go:117] "RemoveContainer" containerID="2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f"
	Aug 05 11:34:00 addons-624151 kubelet[1279]: I0805 11:34:00.952128    1279 scope.go:117] "RemoveContainer" containerID="2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f"
	Aug 05 11:34:00 addons-624151 kubelet[1279]: E0805 11:34:00.953073    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f\": container with ID starting with 2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f not found: ID does not exist" containerID="2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f"
	Aug 05 11:34:00 addons-624151 kubelet[1279]: I0805 11:34:00.953269    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f"} err="failed to get container status \"2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f\": rpc error: code = NotFound desc = could not find container \"2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f\": container with ID starting with 2ee2c91425101f0858c5afb9b060f75a1f36fced3fe277f3a9367bed47545d4f not found: ID does not exist"
	Aug 05 11:34:01 addons-624151 kubelet[1279]: I0805 11:34:01.335389    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48e697f-4786-4387-9ef7-f15a45091c80" path="/var/lib/kubelet/pods/a48e697f-4786-4387-9ef7-f15a45091c80/volumes"
	Aug 05 11:34:03 addons-624151 kubelet[1279]: I0805 11:34:03.332220    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac13ed42-f259-4e3c-af23-d3dd00413b01" path="/var/lib/kubelet/pods/ac13ed42-f259-4e3c-af23-d3dd00413b01/volumes"
	Aug 05 11:34:03 addons-624151 kubelet[1279]: I0805 11:34:03.332674    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b609656d-850a-459b-9f53-d5183d9245a3" path="/var/lib/kubelet/pods/b609656d-850a-459b-9f53-d5183d9245a3/volumes"
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.783316    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4386ba62-d693-491b-8189-b027aa4647ee-webhook-cert\") pod \"4386ba62-d693-491b-8189-b027aa4647ee\" (UID: \"4386ba62-d693-491b-8189-b027aa4647ee\") "
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.783370    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-774c2\" (UniqueName: \"kubernetes.io/projected/4386ba62-d693-491b-8189-b027aa4647ee-kube-api-access-774c2\") pod \"4386ba62-d693-491b-8189-b027aa4647ee\" (UID: \"4386ba62-d693-491b-8189-b027aa4647ee\") "
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.785700    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4386ba62-d693-491b-8189-b027aa4647ee-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4386ba62-d693-491b-8189-b027aa4647ee" (UID: "4386ba62-d693-491b-8189-b027aa4647ee"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.790030    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4386ba62-d693-491b-8189-b027aa4647ee-kube-api-access-774c2" (OuterVolumeSpecName: "kube-api-access-774c2") pod "4386ba62-d693-491b-8189-b027aa4647ee" (UID: "4386ba62-d693-491b-8189-b027aa4647ee"). InnerVolumeSpecName "kube-api-access-774c2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.884277    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-774c2\" (UniqueName: \"kubernetes.io/projected/4386ba62-d693-491b-8189-b027aa4647ee-kube-api-access-774c2\") on node \"addons-624151\" DevicePath \"\""
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.884325    1279 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4386ba62-d693-491b-8189-b027aa4647ee-webhook-cert\") on node \"addons-624151\" DevicePath \"\""
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.941083    1279 scope.go:117] "RemoveContainer" containerID="32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee"
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.959620    1279 scope.go:117] "RemoveContainer" containerID="32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee"
	Aug 05 11:34:04 addons-624151 kubelet[1279]: E0805 11:34:04.960205    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee\": container with ID starting with 32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee not found: ID does not exist" containerID="32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee"
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.960230    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee"} err="failed to get container status \"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee\": rpc error: code = NotFound desc = could not find container \"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee\": container with ID starting with 32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee not found: ID does not exist"
	Aug 05 11:34:05 addons-624151 kubelet[1279]: I0805 11:34:05.329387    1279 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 05 11:34:05 addons-624151 kubelet[1279]: I0805 11:34:05.332623    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4386ba62-d693-491b-8189-b027aa4647ee" path="/var/lib/kubelet/pods/4386ba62-d693-491b-8189-b027aa4647ee/volumes"
	
	
	==> storage-provisioner [c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c] <==
	I0805 11:28:59.388610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 11:28:59.466562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 11:28:59.466635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 11:28:59.513055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 11:28:59.513221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-624151_675c37c5-20bf-454e-96ac-66a43e0d8ee8!
	I0805 11:28:59.514192       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6bd3dc1-59d0-408a-80c8-4e964d76e9ca", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-624151_675c37c5-20bf-454e-96ac-66a43e0d8ee8 became leader
	I0805 11:28:59.614995       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-624151_675c37c5-20bf-454e-96ac-66a43e0d8ee8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-624151 -n addons-624151
helpers_test.go:261: (dbg) Run:  kubectl --context addons-624151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.66s)
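A minimal manual reproduction of the reachability check that timed out, assuming the addons-624151 profile is still running and using only commands that already appear in this report (a sketch, not part of the recorded test run):

	kubectl --context addons-624151 -n ingress-nginx get pods -o wide
	kubectl --context addons-624151 get ingress -A
	out/minikube-linux-amd64 -p addons-624151 ssh "curl -sI http://127.0.0.1/ -H 'Host: nginx.example.com'"

If the controller pod reports Ready but the curl over SSH still hangs, that would point at routing from the node to the ingress controller rather than at the nginx backend pod itself.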

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (330.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.951484ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-f96nq" [7b3be79e-f92b-4158-8829-8fc50c6ebbd1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006029173s
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (92.48441ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-624151, age: 2m13.595916409s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (85.502812ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 2m3.096494364s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (80.617596ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 2m7.456908799s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (73.956355ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 2m16.088061561s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (72.251822ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 2m23.691612142s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (77.075107ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 2m44.440708254s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (67.40494ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 3m2.766634816s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (87.840907ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 3m24.883850322s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (72.990857ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 4m19.592705484s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (62.528659ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 5m14.491567029s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (68.390577ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 6m41.797463979s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-624151 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-624151 top pods -n kube-system: exit status 1 (69.478138ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s7xqd, age: 7m21.093173494s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable metrics-server --alsologtostderr -v=1
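The metrics-server pod passed its readiness check, yet every `kubectl top pods` attempt returned "Metrics not available", which suggests the resource metrics API itself never started serving. A minimal way to confirm that from the same context (a diagnostic sketch, not part of the recorded run):

	kubectl --context addons-624151 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-624151 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	kubectl --context addons-624151 -n kube-system logs deploy/metrics-server

If the APIService shows Available=False or the raw request fails, the problem is likely registration or connectivity to metrics-server, not merely metrics that have yet to be scraped.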
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-624151 -n addons-624151
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 logs -n 25: (1.305366408s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-413572                                                                     | download-only-413572 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-355673 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | binary-mirror-355673                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37911                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-355673                                                                     | binary-mirror-355673 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-624151 --wait=true                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:30 UTC | 05 Aug 24 11:30 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:30 UTC | 05 Aug 24 11:30 UTC |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| ip      | addons-624151 ip                                                                            | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | -p addons-624151                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | addons-624151                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | -p addons-624151                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-624151 ssh cat                                                                       | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | /opt/local-path-provisioner/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC | 05 Aug 24 11:31 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-624151 ssh curl -s                                                                   | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-624151 addons                                                                        | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:32 UTC | 05 Aug 24 11:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:32 UTC | 05 Aug 24 11:32 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons                                                                        | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:32 UTC | 05 Aug 24 11:32 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-624151 ip                                                                            | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:33 UTC | 05 Aug 24 11:33 UTC |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:33 UTC | 05 Aug 24 11:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-624151 addons disable                                                                | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:34 UTC | 05 Aug 24 11:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-624151 addons                                                                        | addons-624151        | jenkins | v1.33.1 | 05 Aug 24 11:36 UTC | 05 Aug 24 11:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:27:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:27:52.759052  392242 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:27:52.759333  392242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:52.759344  392242 out.go:304] Setting ErrFile to fd 2...
	I0805 11:27:52.759351  392242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:52.759531  392242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:27:52.760217  392242 out.go:298] Setting JSON to false
	I0805 11:27:52.761164  392242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4220,"bootTime":1722853053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:27:52.761228  392242 start.go:139] virtualization: kvm guest
	I0805 11:27:52.763387  392242 out.go:177] * [addons-624151] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:27:52.764728  392242 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:27:52.764752  392242 notify.go:220] Checking for updates...
	I0805 11:27:52.767246  392242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:27:52.768625  392242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:27:52.769916  392242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:52.771087  392242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:27:52.772244  392242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:27:52.773484  392242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:27:52.804582  392242 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 11:27:52.805771  392242 start.go:297] selected driver: kvm2
	I0805 11:27:52.805789  392242 start.go:901] validating driver "kvm2" against <nil>
	I0805 11:27:52.805802  392242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:27:52.806576  392242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:52.806676  392242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:27:52.821329  392242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:27:52.821382  392242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:27:52.821622  392242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:27:52.821698  392242 cni.go:84] Creating CNI manager for ""
	I0805 11:27:52.821716  392242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:27:52.821723  392242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 11:27:52.821793  392242 start.go:340] cluster config:
	{Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:27:52.821902  392242 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:52.823867  392242 out.go:177] * Starting "addons-624151" primary control-plane node in "addons-624151" cluster
	I0805 11:27:52.825635  392242 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:27:52.825677  392242 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 11:27:52.825686  392242 cache.go:56] Caching tarball of preloaded images
	I0805 11:27:52.825772  392242 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:27:52.825785  392242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:27:52.826133  392242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/config.json ...
	I0805 11:27:52.826161  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/config.json: {Name:mk494d23b64500b0325395df24dde97d7c38f780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:27:52.826317  392242 start.go:360] acquireMachinesLock for addons-624151: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:27:52.826371  392242 start.go:364] duration metric: took 38.07µs to acquireMachinesLock for "addons-624151"
	I0805 11:27:52.826392  392242 start.go:93] Provisioning new machine with config: &{Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:27:52.826461  392242 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 11:27:52.828342  392242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 11:27:52.828501  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:27:52.828562  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:27:52.843342  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0805 11:27:52.843875  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:27:52.844516  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:27:52.844540  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:27:52.844889  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:27:52.845059  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:27:52.845220  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:27:52.845422  392242 start.go:159] libmachine.API.Create for "addons-624151" (driver="kvm2")
	I0805 11:27:52.845453  392242 client.go:168] LocalClient.Create starting
	I0805 11:27:52.845489  392242 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:27:53.055523  392242 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:27:53.217162  392242 main.go:141] libmachine: Running pre-create checks...
	I0805 11:27:53.217192  392242 main.go:141] libmachine: (addons-624151) Calling .PreCreateCheck
	I0805 11:27:53.217788  392242 main.go:141] libmachine: (addons-624151) Calling .GetConfigRaw
	I0805 11:27:53.218271  392242 main.go:141] libmachine: Creating machine...
	I0805 11:27:53.218286  392242 main.go:141] libmachine: (addons-624151) Calling .Create
	I0805 11:27:53.218462  392242 main.go:141] libmachine: (addons-624151) Creating KVM machine...
	I0805 11:27:53.219850  392242 main.go:141] libmachine: (addons-624151) DBG | found existing default KVM network
	I0805 11:27:53.220696  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.220519  392264 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0805 11:27:53.220778  392242 main.go:141] libmachine: (addons-624151) DBG | created network xml: 
	I0805 11:27:53.220804  392242 main.go:141] libmachine: (addons-624151) DBG | <network>
	I0805 11:27:53.220818  392242 main.go:141] libmachine: (addons-624151) DBG |   <name>mk-addons-624151</name>
	I0805 11:27:53.220831  392242 main.go:141] libmachine: (addons-624151) DBG |   <dns enable='no'/>
	I0805 11:27:53.220845  392242 main.go:141] libmachine: (addons-624151) DBG |   
	I0805 11:27:53.220858  392242 main.go:141] libmachine: (addons-624151) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 11:27:53.220871  392242 main.go:141] libmachine: (addons-624151) DBG |     <dhcp>
	I0805 11:27:53.220882  392242 main.go:141] libmachine: (addons-624151) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 11:27:53.220893  392242 main.go:141] libmachine: (addons-624151) DBG |     </dhcp>
	I0805 11:27:53.220912  392242 main.go:141] libmachine: (addons-624151) DBG |   </ip>
	I0805 11:27:53.220926  392242 main.go:141] libmachine: (addons-624151) DBG |   
	I0805 11:27:53.220935  392242 main.go:141] libmachine: (addons-624151) DBG | </network>
	I0805 11:27:53.220948  392242 main.go:141] libmachine: (addons-624151) DBG | 
	I0805 11:27:53.226001  392242 main.go:141] libmachine: (addons-624151) DBG | trying to create private KVM network mk-addons-624151 192.168.39.0/24...
	I0805 11:27:53.292284  392242 main.go:141] libmachine: (addons-624151) DBG | private KVM network mk-addons-624151 192.168.39.0/24 created
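The network definition above is plain libvirt XML: an isolated bridge named mk-addons-624151 with DNS disabled and a DHCP range covering 192.168.39.2-253. A minimal sketch of rendering that shape of XML with Go's text/template (illustrative only; the template and field names here are not minikube's actual code):

	package main

	import (
		"os"
		"text/template"
	)

	// netXML mirrors the shape of the network definition shown in the log:
	// an isolated bridge with DNS disabled and a DHCP range for guest IPs.
	const netXML = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	    </dhcp>
	  </ip>
	</network>
	`

	func main() {
		tmpl := template.Must(template.New("net").Parse(netXML))
		// Values copied from the 192.168.39.0/24 subnet chosen in the log.
		_ = tmpl.Execute(os.Stdout, map[string]string{
			"Name":      "mk-addons-624151",
			"Gateway":   "192.168.39.1",
			"Netmask":   "255.255.255.0",
			"ClientMin": "192.168.39.2",
			"ClientMax": "192.168.39.253",
		})
	}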
	I0805 11:27:53.292343  392242 main.go:141] libmachine: (addons-624151) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151 ...
	I0805 11:27:53.292370  392242 main.go:141] libmachine: (addons-624151) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:27:53.292387  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.292267  392264 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:53.292413  392242 main.go:141] libmachine: (addons-624151) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:27:53.594302  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.594181  392264 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa...
	I0805 11:27:53.825891  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.825744  392264 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/addons-624151.rawdisk...
	I0805 11:27:53.825921  392242 main.go:141] libmachine: (addons-624151) DBG | Writing magic tar header
	I0805 11:27:53.825933  392242 main.go:141] libmachine: (addons-624151) DBG | Writing SSH key tar header
	I0805 11:27:53.825946  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:53.825873  392264 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151 ...
	I0805 11:27:53.826034  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151
	I0805 11:27:53.826061  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:27:53.826076  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151 (perms=drwx------)
	I0805 11:27:53.826093  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:27:53.826105  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:27:53.826116  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:27:53.826126  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:27:53.826140  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:53.826150  392242 main.go:141] libmachine: (addons-624151) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:27:53.826163  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:27:53.826175  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:27:53.826183  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:27:53.826195  392242 main.go:141] libmachine: (addons-624151) DBG | Checking permissions on dir: /home
	I0805 11:27:53.826206  392242 main.go:141] libmachine: (addons-624151) DBG | Skipping /home - not owner
	I0805 11:27:53.826213  392242 main.go:141] libmachine: (addons-624151) Creating domain...
	I0805 11:27:53.827286  392242 main.go:141] libmachine: (addons-624151) define libvirt domain using xml: 
	I0805 11:27:53.827323  392242 main.go:141] libmachine: (addons-624151) <domain type='kvm'>
	I0805 11:27:53.827335  392242 main.go:141] libmachine: (addons-624151)   <name>addons-624151</name>
	I0805 11:27:53.827348  392242 main.go:141] libmachine: (addons-624151)   <memory unit='MiB'>4000</memory>
	I0805 11:27:53.827355  392242 main.go:141] libmachine: (addons-624151)   <vcpu>2</vcpu>
	I0805 11:27:53.827361  392242 main.go:141] libmachine: (addons-624151)   <features>
	I0805 11:27:53.827366  392242 main.go:141] libmachine: (addons-624151)     <acpi/>
	I0805 11:27:53.827370  392242 main.go:141] libmachine: (addons-624151)     <apic/>
	I0805 11:27:53.827378  392242 main.go:141] libmachine: (addons-624151)     <pae/>
	I0805 11:27:53.827382  392242 main.go:141] libmachine: (addons-624151)     
	I0805 11:27:53.827387  392242 main.go:141] libmachine: (addons-624151)   </features>
	I0805 11:27:53.827394  392242 main.go:141] libmachine: (addons-624151)   <cpu mode='host-passthrough'>
	I0805 11:27:53.827399  392242 main.go:141] libmachine: (addons-624151)   
	I0805 11:27:53.827408  392242 main.go:141] libmachine: (addons-624151)   </cpu>
	I0805 11:27:53.827413  392242 main.go:141] libmachine: (addons-624151)   <os>
	I0805 11:27:53.827419  392242 main.go:141] libmachine: (addons-624151)     <type>hvm</type>
	I0805 11:27:53.827445  392242 main.go:141] libmachine: (addons-624151)     <boot dev='cdrom'/>
	I0805 11:27:53.827474  392242 main.go:141] libmachine: (addons-624151)     <boot dev='hd'/>
	I0805 11:27:53.827483  392242 main.go:141] libmachine: (addons-624151)     <bootmenu enable='no'/>
	I0805 11:27:53.827490  392242 main.go:141] libmachine: (addons-624151)   </os>
	I0805 11:27:53.827495  392242 main.go:141] libmachine: (addons-624151)   <devices>
	I0805 11:27:53.827501  392242 main.go:141] libmachine: (addons-624151)     <disk type='file' device='cdrom'>
	I0805 11:27:53.827511  392242 main.go:141] libmachine: (addons-624151)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/boot2docker.iso'/>
	I0805 11:27:53.827520  392242 main.go:141] libmachine: (addons-624151)       <target dev='hdc' bus='scsi'/>
	I0805 11:27:53.827528  392242 main.go:141] libmachine: (addons-624151)       <readonly/>
	I0805 11:27:53.827543  392242 main.go:141] libmachine: (addons-624151)     </disk>
	I0805 11:27:53.827556  392242 main.go:141] libmachine: (addons-624151)     <disk type='file' device='disk'>
	I0805 11:27:53.827568  392242 main.go:141] libmachine: (addons-624151)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:27:53.827583  392242 main.go:141] libmachine: (addons-624151)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/addons-624151.rawdisk'/>
	I0805 11:27:53.827592  392242 main.go:141] libmachine: (addons-624151)       <target dev='hda' bus='virtio'/>
	I0805 11:27:53.827597  392242 main.go:141] libmachine: (addons-624151)     </disk>
	I0805 11:27:53.827607  392242 main.go:141] libmachine: (addons-624151)     <interface type='network'>
	I0805 11:27:53.827623  392242 main.go:141] libmachine: (addons-624151)       <source network='mk-addons-624151'/>
	I0805 11:27:53.827635  392242 main.go:141] libmachine: (addons-624151)       <model type='virtio'/>
	I0805 11:27:53.827647  392242 main.go:141] libmachine: (addons-624151)     </interface>
	I0805 11:27:53.827657  392242 main.go:141] libmachine: (addons-624151)     <interface type='network'>
	I0805 11:27:53.827668  392242 main.go:141] libmachine: (addons-624151)       <source network='default'/>
	I0805 11:27:53.827683  392242 main.go:141] libmachine: (addons-624151)       <model type='virtio'/>
	I0805 11:27:53.827688  392242 main.go:141] libmachine: (addons-624151)     </interface>
	I0805 11:27:53.827696  392242 main.go:141] libmachine: (addons-624151)     <serial type='pty'>
	I0805 11:27:53.827705  392242 main.go:141] libmachine: (addons-624151)       <target port='0'/>
	I0805 11:27:53.827715  392242 main.go:141] libmachine: (addons-624151)     </serial>
	I0805 11:27:53.827726  392242 main.go:141] libmachine: (addons-624151)     <console type='pty'>
	I0805 11:27:53.827738  392242 main.go:141] libmachine: (addons-624151)       <target type='serial' port='0'/>
	I0805 11:27:53.827779  392242 main.go:141] libmachine: (addons-624151)     </console>
	I0805 11:27:53.827799  392242 main.go:141] libmachine: (addons-624151)     <rng model='virtio'>
	I0805 11:27:53.827826  392242 main.go:141] libmachine: (addons-624151)       <backend model='random'>/dev/random</backend>
	I0805 11:27:53.827838  392242 main.go:141] libmachine: (addons-624151)     </rng>
	I0805 11:27:53.827847  392242 main.go:141] libmachine: (addons-624151)     
	I0805 11:27:53.827859  392242 main.go:141] libmachine: (addons-624151)     
	I0805 11:27:53.827867  392242 main.go:141] libmachine: (addons-624151)   </devices>
	I0805 11:27:53.827872  392242 main.go:141] libmachine: (addons-624151) </domain>
	I0805 11:27:53.827880  392242 main.go:141] libmachine: (addons-624151) 
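Once the domain XML above has been defined, the same information can be read back from libvirt on the host. A small sketch that shells out to virsh, assuming virsh is installed alongside the kvm2 driver (the virsh commands are stock libvirt; wiring them through Go like this is only for illustration):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Dump the domain definition that libmachine just created.
		out, err := exec.Command("virsh", "dumpxml", "addons-624151").CombinedOutput()
		fmt.Println(string(out), err)

		// List DHCP leases on the private network; this is where the
		// 192.168.39.142 address eventually shows up.
		out, err = exec.Command("virsh", "net-dhcp-leases", "mk-addons-624151").CombinedOutput()
		fmt.Println(string(out), err)
	}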
	I0805 11:27:53.833598  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:4f:e3:f8 in network default
	I0805 11:27:53.834140  392242 main.go:141] libmachine: (addons-624151) Ensuring networks are active...
	I0805 11:27:53.834159  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:53.834898  392242 main.go:141] libmachine: (addons-624151) Ensuring network default is active
	I0805 11:27:53.835256  392242 main.go:141] libmachine: (addons-624151) Ensuring network mk-addons-624151 is active
	I0805 11:27:53.835801  392242 main.go:141] libmachine: (addons-624151) Getting domain xml...
	I0805 11:27:53.836572  392242 main.go:141] libmachine: (addons-624151) Creating domain...
	I0805 11:27:55.232003  392242 main.go:141] libmachine: (addons-624151) Waiting to get IP...
	I0805 11:27:55.232750  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:55.233144  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:55.233168  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:55.233126  392264 retry.go:31] will retry after 267.947848ms: waiting for machine to come up
	I0805 11:27:55.502543  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:55.503046  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:55.503073  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:55.503008  392264 retry.go:31] will retry after 343.226091ms: waiting for machine to come up
	I0805 11:27:55.847465  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:55.847806  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:55.847828  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:55.847774  392264 retry.go:31] will retry after 296.941317ms: waiting for machine to come up
	I0805 11:27:56.146181  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:56.146506  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:56.146539  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:56.146466  392264 retry.go:31] will retry after 435.407049ms: waiting for machine to come up
	I0805 11:27:56.583207  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:56.583658  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:56.583680  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:56.583609  392264 retry.go:31] will retry after 601.17555ms: waiting for machine to come up
	I0805 11:27:57.186468  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:57.186967  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:57.186995  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:57.186926  392264 retry.go:31] will retry after 719.110935ms: waiting for machine to come up
	I0805 11:27:57.907567  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:57.908039  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:57.908070  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:57.908008  392264 retry.go:31] will retry after 934.35208ms: waiting for machine to come up
	I0805 11:27:58.844305  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:58.844653  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:58.844683  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:58.844602  392264 retry.go:31] will retry after 1.082420814s: waiting for machine to come up
	I0805 11:27:59.928932  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:27:59.929392  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:27:59.929419  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:27:59.929340  392264 retry.go:31] will retry after 1.228963819s: waiting for machine to come up
	I0805 11:28:01.159962  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:01.160367  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:01.160386  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:01.160331  392264 retry.go:31] will retry after 2.152496576s: waiting for machine to come up
	I0805 11:28:03.314877  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:03.315338  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:03.315416  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:03.315306  392264 retry.go:31] will retry after 2.810488145s: waiting for machine to come up
	I0805 11:28:06.127079  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:06.127443  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:06.127467  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:06.127392  392264 retry.go:31] will retry after 2.755271269s: waiting for machine to come up
	I0805 11:28:08.883971  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:08.884504  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:08.884531  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:08.884427  392264 retry.go:31] will retry after 4.321043706s: waiting for machine to come up
	I0805 11:28:13.207117  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:13.207475  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find current IP address of domain addons-624151 in network mk-addons-624151
	I0805 11:28:13.207499  392242 main.go:141] libmachine: (addons-624151) DBG | I0805 11:28:13.207423  392264 retry.go:31] will retry after 5.45439584s: waiting for machine to come up
	I0805 11:28:18.663890  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:18.664358  392242 main.go:141] libmachine: (addons-624151) Found IP for machine: 192.168.39.142
	I0805 11:28:18.664393  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has current primary IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:18.664402  392242 main.go:141] libmachine: (addons-624151) Reserving static IP address...
	I0805 11:28:18.664790  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find host DHCP lease matching {name: "addons-624151", mac: "52:54:00:7b:74:67", ip: "192.168.39.142"} in network mk-addons-624151
	I0805 11:28:18.736843  392242 main.go:141] libmachine: (addons-624151) DBG | Getting to WaitForSSH function...
	I0805 11:28:18.736877  392242 main.go:141] libmachine: (addons-624151) Reserved static IP address: 192.168.39.142
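The repeated "will retry after ..." messages above come from polling the network's DHCP leases with a growing delay until the guest obtains an address. A minimal sketch of that poll-and-back-off shape, with lookupIP as a hypothetical stand-in for the lease query (the jittered intervals in the log are produced by minikube's retry helper; only the loop structure is shown here):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a stand-in for querying the DHCP leases of the
	// mk-addons-624151 network for the domain's MAC address.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 250 * time.Millisecond
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay *= 2 // back off, roughly like the intervals in the log
		}
		fmt.Println("timed out waiting for an IP")
	}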
	I0805 11:28:18.736892  392242 main.go:141] libmachine: (addons-624151) Waiting for SSH to be available...
	I0805 11:28:18.739335  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:18.739596  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151
	I0805 11:28:18.739622  392242 main.go:141] libmachine: (addons-624151) DBG | unable to find defined IP address of network mk-addons-624151 interface with MAC address 52:54:00:7b:74:67
	I0805 11:28:18.739927  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH client type: external
	I0805 11:28:18.739952  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa (-rw-------)
	I0805 11:28:18.739993  392242 main.go:141] libmachine: (addons-624151) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:28:18.740021  392242 main.go:141] libmachine: (addons-624151) DBG | About to run SSH command:
	I0805 11:28:18.740037  392242 main.go:141] libmachine: (addons-624151) DBG | exit 0
	I0805 11:28:18.743941  392242 main.go:141] libmachine: (addons-624151) DBG | SSH cmd err, output: exit status 255: 
	I0805 11:28:18.743965  392242 main.go:141] libmachine: (addons-624151) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0805 11:28:18.743974  392242 main.go:141] libmachine: (addons-624151) DBG | command : exit 0
	I0805 11:28:18.743981  392242 main.go:141] libmachine: (addons-624151) DBG | err     : exit status 255
	I0805 11:28:18.743991  392242 main.go:141] libmachine: (addons-624151) DBG | output  : 
	I0805 11:28:21.746187  392242 main.go:141] libmachine: (addons-624151) DBG | Getting to WaitForSSH function...
	I0805 11:28:21.748602  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.748946  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:21.748977  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.749101  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH client type: external
	I0805 11:28:21.749130  392242 main.go:141] libmachine: (addons-624151) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa (-rw-------)
	I0805 11:28:21.749170  392242 main.go:141] libmachine: (addons-624151) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:28:21.749187  392242 main.go:141] libmachine: (addons-624151) DBG | About to run SSH command:
	I0805 11:28:21.749216  392242 main.go:141] libmachine: (addons-624151) DBG | exit 0
	I0805 11:28:21.875926  392242 main.go:141] libmachine: (addons-624151) DBG | SSH cmd err, output: <nil>: 
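WaitForSSH above is simply the external ssh client running a no-op command until it returns exit status 0; the first attempt fails with status 255 because the guest's address was not yet known, and the retry a few seconds later succeeds. A sketch of the same probe using os/exec with the options shown in the log (the key path, user, and address are the ones from this run and are otherwise arbitrary):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa"
		for i := 0; i < 10; i++ {
			// Same probe the log shows: a no-op command that only proves
			// the SSH daemon is up and the key is accepted.
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", key, "docker@192.168.39.142", "exit 0")
			if err := cmd.Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second) // the log retries on a similar cadence
		}
		fmt.Println("gave up waiting for SSH")
	}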
	I0805 11:28:21.876279  392242 main.go:141] libmachine: (addons-624151) KVM machine creation complete!
	I0805 11:28:21.876518  392242 main.go:141] libmachine: (addons-624151) Calling .GetConfigRaw
	I0805 11:28:21.877280  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:21.877491  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:21.877656  392242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:28:21.877673  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:21.878964  392242 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:28:21.878978  392242 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:28:21.878984  392242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:28:21.878989  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:21.881208  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.881591  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:21.881619  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.881751  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:21.881920  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.882080  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.882215  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:21.882396  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:21.882628  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:21.882643  392242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:28:21.995193  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:28:21.995218  392242 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:28:21.995228  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:21.998288  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.998695  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:21.998721  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:21.998924  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:21.999175  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.999370  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:21.999526  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:21.999785  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:21.999977  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:21.999989  392242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:28:22.112625  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:28:22.112723  392242 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:28:22.112734  392242 main.go:141] libmachine: Provisioning with buildroot...
	I0805 11:28:22.112743  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:28:22.112981  392242 buildroot.go:166] provisioning hostname "addons-624151"
	I0805 11:28:22.113012  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:28:22.113226  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.115718  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.116158  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.116185  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.116360  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.116936  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.117434  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.117676  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.117893  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:22.118123  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:22.118141  392242 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-624151 && echo "addons-624151" | sudo tee /etc/hostname
	I0805 11:28:22.243247  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-624151
	
	I0805 11:28:22.243274  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.246134  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.246505  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.246543  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.246755  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.246955  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.247138  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.247292  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.247446  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:22.247652  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:22.247669  392242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-624151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-624151/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-624151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:28:22.369203  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
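The shell fragment above keeps /etc/hosts consistent with the new hostname and is safe to re-run: it appends a 127.0.1.1 entry only when no entry for the hostname exists, otherwise it rewrites the existing one. A sketch of a helper that renders the same snippet for an arbitrary node name (hypothetical; minikube builds this string inline):

	package main

	import "fmt"

	// hostsFixupCmd returns the shell snippet shown in the log: it rewrites an
	// existing 127.0.1.1 line or appends one, but only when no entry for the
	// hostname is present yet, so re-running it is harmless.
	func hostsFixupCmd(name string) string {
		return fmt.Sprintf(
			`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
				`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
				`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
				`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
	}

	func main() {
		fmt.Println(hostsFixupCmd("addons-624151"))
	}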
	I0805 11:28:22.369245  392242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:28:22.369322  392242 buildroot.go:174] setting up certificates
	I0805 11:28:22.369350  392242 provision.go:84] configureAuth start
	I0805 11:28:22.369373  392242 main.go:141] libmachine: (addons-624151) Calling .GetMachineName
	I0805 11:28:22.369705  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:22.372267  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.372559  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.372587  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.372718  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.374796  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.375130  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.375157  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.375314  392242 provision.go:143] copyHostCerts
	I0805 11:28:22.375399  392242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:28:22.375597  392242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:28:22.375682  392242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:28:22.375772  392242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.addons-624151 san=[127.0.0.1 192.168.39.142 addons-624151 localhost minikube]
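configureAuth above generates a server certificate whose subject alternative names cover the loopback address, the VM's IP, and the host names the machine may be reached by. A self-signed sketch with crypto/x509 showing the same SAN handling (minikube signs with its own CA instead; the values are copied from this run and are only illustrative):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-624151"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the log: loopback, the VM IP, and the hostnames.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.142")},
			DNSNames:    []string{"addons-624151", "localhost", "minikube"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}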
	I0805 11:28:22.534263  392242 provision.go:177] copyRemoteCerts
	I0805 11:28:22.534327  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:28:22.534354  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.537700  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.538089  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.538115  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.538372  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.538581  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.538732  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.538853  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:22.626460  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:28:22.651152  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 11:28:22.676715  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 11:28:22.702742  392242 provision.go:87] duration metric: took 333.372423ms to configureAuth
	I0805 11:28:22.702777  392242 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:28:22.703027  392242 config.go:182] Loaded profile config "addons-624151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:28:22.703127  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.705594  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.705948  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.705974  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.706116  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.706321  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.706519  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.706671  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.706805  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:22.706965  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:22.706979  392242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:28:22.985822  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
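For reference, the option written above lands in a sysconfig-style environment file that the crio systemd unit reads (typically via an EnvironmentFile= directive), which is why the command ends with a crio restart. A quick way to confirm it from the host, assuming the profile name used in this run, is:

    minikube ssh -p addons-624151 "cat /etc/sysconfig/crio.minikube"
    minikube ssh -p addons-624151 "systemctl cat crio | grep -i environment"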
	
	I0805 11:28:22.985854  392242 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:28:22.985863  392242 main.go:141] libmachine: (addons-624151) Calling .GetURL
	I0805 11:28:22.987239  392242 main.go:141] libmachine: (addons-624151) DBG | Using libvirt version 6000000
	I0805 11:28:22.989198  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.989656  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.989687  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.989910  392242 main.go:141] libmachine: Docker is up and running!
	I0805 11:28:22.989924  392242 main.go:141] libmachine: Reticulating splines...
	I0805 11:28:22.989932  392242 client.go:171] duration metric: took 30.144472044s to LocalClient.Create
	I0805 11:28:22.989962  392242 start.go:167] duration metric: took 30.144541719s to libmachine.API.Create "addons-624151"
	I0805 11:28:22.989976  392242 start.go:293] postStartSetup for "addons-624151" (driver="kvm2")
	I0805 11:28:22.989991  392242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:28:22.990014  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:22.990290  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:28:22.990315  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:22.992656  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.992993  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:22.993023  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:22.993147  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:22.993326  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:22.993511  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:22.993670  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:23.078033  392242 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:28:23.082418  392242 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:28:23.082461  392242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:28:23.082553  392242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:28:23.082577  392242 start.go:296] duration metric: took 92.592498ms for postStartSetup
	I0805 11:28:23.082617  392242 main.go:141] libmachine: (addons-624151) Calling .GetConfigRaw
	I0805 11:28:23.083314  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:23.086031  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.086373  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.086399  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.086618  392242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/config.json ...
	I0805 11:28:23.086841  392242 start.go:128] duration metric: took 30.260368337s to createHost
	I0805 11:28:23.086880  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:23.089102  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.089425  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.089449  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.089562  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:23.089824  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.090014  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.090179  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:23.090359  392242 main.go:141] libmachine: Using SSH client type: native
	I0805 11:28:23.090529  392242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0805 11:28:23.090540  392242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:28:23.204745  392242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722857303.187203054
	
	I0805 11:28:23.204774  392242 fix.go:216] guest clock: 1722857303.187203054
	I0805 11:28:23.204784  392242 fix.go:229] Guest: 2024-08-05 11:28:23.187203054 +0000 UTC Remote: 2024-08-05 11:28:23.086854803 +0000 UTC m=+30.362744727 (delta=100.348251ms)
	I0805 11:28:23.204851  392242 fix.go:200] guest clock delta is within tolerance: 100.348251ms
	I0805 11:28:23.204874  392242 start.go:83] releasing machines lock for "addons-624151", held for 30.378492825s
	I0805 11:28:23.204908  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.205244  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:23.207972  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.208595  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.208608  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.208643  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.209124  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.209307  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:23.209434  392242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:28:23.209482  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:23.209547  392242 ssh_runner.go:195] Run: cat /version.json
	I0805 11:28:23.209569  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:23.212093  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212264  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212450  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.212477  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212623  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:23.212652  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:23.212657  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:23.212823  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:23.212868  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.213043  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:23.213049  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:23.213240  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:23.213243  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:23.213385  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:23.293185  392242 ssh_runner.go:195] Run: systemctl --version
	I0805 11:28:23.320391  392242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:28:23.480109  392242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:28:23.486161  392242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:28:23.486235  392242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:28:23.502622  392242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:28:23.502652  392242 start.go:495] detecting cgroup driver to use...
	I0805 11:28:23.502735  392242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:28:23.520055  392242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:28:23.534083  392242 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:28:23.534157  392242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:28:23.549226  392242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:28:23.563199  392242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:28:23.677620  392242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:28:23.819780  392242 docker.go:233] disabling docker service ...
	I0805 11:28:23.819865  392242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:28:23.833808  392242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:28:23.848833  392242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:28:23.986678  392242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:28:24.120165  392242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:28:24.133913  392242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:28:24.152724  392242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:28:24.152802  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.163359  392242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:28:24.163478  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.174058  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.184879  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.195734  392242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:28:24.206141  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.216026  392242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:28:24.233192  392242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
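Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl set as follows; a quick way to verify inside the VM (expected values inferred from the commands above, not from a dump of the file):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls = [ ... ] block)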
	I0805 11:28:24.243197  392242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:28:24.252414  392242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:28:24.252470  392242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:28:24.266281  392242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
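The sysctl probe above fails simply because br_netfilter is not loaded yet, hence the modprobe and the ip_forward write that follow. A minimal sketch of how to confirm both prerequisites by hand:

    sudo modprobe br_netfilter
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # usually reports 1 once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # set to 1 by the echo above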
	I0805 11:28:24.276080  392242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:28:24.398683  392242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:28:24.535410  392242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:28:24.535513  392242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:28:24.541078  392242 start.go:563] Will wait 60s for crictl version
	I0805 11:28:24.541177  392242 ssh_runner.go:195] Run: which crictl
	I0805 11:28:24.545109  392242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:28:24.585245  392242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:28:24.585380  392242 ssh_runner.go:195] Run: crio --version
	I0805 11:28:24.614699  392242 ssh_runner.go:195] Run: crio --version
	I0805 11:28:24.644681  392242 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:28:24.645937  392242 main.go:141] libmachine: (addons-624151) Calling .GetIP
	I0805 11:28:24.648641  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:24.648975  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:24.649005  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:24.649256  392242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:28:24.653580  392242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:28:24.667222  392242 kubeadm.go:883] updating cluster {Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 11:28:24.667345  392242 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:28:24.667401  392242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:28:24.700316  392242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 11:28:24.700380  392242 ssh_runner.go:195] Run: which lz4
	I0805 11:28:24.707369  392242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 11:28:24.715155  392242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 11:28:24.715200  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 11:28:26.073477  392242 crio.go:462] duration metric: took 1.36614873s to copy over tarball
	I0805 11:28:26.073551  392242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 11:28:28.350339  392242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276737729s)
	I0805 11:28:28.350374  392242 crio.go:469] duration metric: took 2.276864789s to extract the tarball
	I0805 11:28:28.350385  392242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 11:28:28.390523  392242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:28:28.434767  392242 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:28:28.434808  392242 cache_images.go:84] Images are preloaded, skipping loading
	I0805 11:28:28.434819  392242 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.30.3 crio true true} ...
	I0805 11:28:28.434970  392242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-624151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
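The kubelet flags above are installed as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines further down), so systemd merges them into the stock kubelet.service. To inspect the merged unit and pick up edits, the usual commands apply:

    systemctl cat kubelet
    sudo systemctl daemon-reload && sudo systemctl restart kubelet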
	I0805 11:28:28.435042  392242 ssh_runner.go:195] Run: crio config
	I0805 11:28:28.479939  392242 cni.go:84] Creating CNI manager for ""
	I0805 11:28:28.479958  392242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:28:28.479968  392242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 11:28:28.479989  392242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-624151 NodeName:addons-624151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 11:28:28.480125  392242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-624151"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
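A generated config like the one above can be sanity-checked without touching cluster state by running kubeadm in dry-run mode against the staged file (a sketch; the binary and config paths are the ones this run uses a few lines below):

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run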
	
	I0805 11:28:28.480197  392242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:28:28.490472  392242 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 11:28:28.490545  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 11:28:28.500350  392242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 11:28:28.517032  392242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:28:28.533680  392242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0805 11:28:28.550370  392242 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0805 11:28:28.554386  392242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:28:28.567368  392242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:28:28.686987  392242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:28:28.705218  392242 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151 for IP: 192.168.39.142
	I0805 11:28:28.705245  392242 certs.go:194] generating shared ca certs ...
	I0805 11:28:28.705264  392242 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:28.705439  392242 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:28:28.796681  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt ...
	I0805 11:28:28.796715  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt: {Name:mkd5fdcf45ea9df6d5fa18d45bdea63152eca76d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:28.796932  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key ...
	I0805 11:28:28.796951  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key: {Name:mk1bbb53bb80f9444fe0f770cd146b0ddaa8afc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:28.797064  392242 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:28:29.059373  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt ...
	I0805 11:28:29.059411  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt: {Name:mkb34ce72bc362bcbac0cd9684abb1d30ca4c34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.059603  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key ...
	I0805 11:28:29.059614  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key: {Name:mked69d0ac8bf6b4a6eff42e658df9ea29c964f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.059692  392242 certs.go:256] generating profile certs ...
	I0805 11:28:29.059779  392242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.key
	I0805 11:28:29.059794  392242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt with IP's: []
	I0805 11:28:29.251125  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt ...
	I0805 11:28:29.251161  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: {Name:mk4ec282ef4daa54f044621721118c8d98e31968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.251330  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.key ...
	I0805 11:28:29.251343  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.key: {Name:mk6589561b42c2c2c2be68e99be6d652fd418e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.251414  392242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb
	I0805 11:28:29.251433  392242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0805 11:28:29.398753  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb ...
	I0805 11:28:29.398790  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb: {Name:mke94d236913fbf5b761f9dc674c8d40be6f2163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.398980  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb ...
	I0805 11:28:29.398995  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb: {Name:mk2bf2fa082dc57ec25f039d12e629f1e37991c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.399085  392242 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt.b5e194bb -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt
	I0805 11:28:29.399167  392242 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key.b5e194bb -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key
	I0805 11:28:29.399221  392242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key
	I0805 11:28:29.399247  392242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt with IP's: []
	I0805 11:28:29.465109  392242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt ...
	I0805 11:28:29.465143  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt: {Name:mk3f6493e491a193b1aa934ef0ff5632e2d4f042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.465310  392242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key ...
	I0805 11:28:29.465323  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key: {Name:mk0a5e508065b816cfb38cb7260296cbd40974f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:29.465491  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:28:29.465532  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:28:29.465559  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:28:29.465586  392242 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:28:29.466240  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:28:29.500549  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:28:29.525523  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:28:29.557252  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:28:29.581272  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 11:28:29.606488  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 11:28:29.631674  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:28:29.658485  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 11:28:29.683337  392242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:28:29.707430  392242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 11:28:29.725871  392242 ssh_runner.go:195] Run: openssl version
	I0805 11:28:29.731606  392242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:28:29.742664  392242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:28:29.747361  392242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:28:29.747428  392242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:28:29.753430  392242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
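The b5213941.0 name comes from OpenSSL's subject-hash convention: openssl x509 -hash prints a short hash of the CA's subject, and that hash (with a .0 suffix) is the filename OpenSSL-based clients look for under /etc/ssl/certs. Reproducing the two steps above by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0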
	I0805 11:28:29.765074  392242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:28:29.769282  392242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:28:29.769361  392242 kubeadm.go:392] StartCluster: {Name:addons-624151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-624151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:28:29.769443  392242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 11:28:29.769634  392242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 11:28:29.806553  392242 cri.go:89] found id: ""
	I0805 11:28:29.806646  392242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 11:28:29.817288  392242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 11:28:29.827658  392242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 11:28:29.838106  392242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 11:28:29.838141  392242 kubeadm.go:157] found existing configuration files:
	
	I0805 11:28:29.838201  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 11:28:29.849540  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 11:28:29.849612  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 11:28:29.861521  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 11:28:29.871652  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 11:28:29.871718  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 11:28:29.881852  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 11:28:29.891180  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 11:28:29.891234  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 11:28:29.901243  392242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 11:28:29.911541  392242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 11:28:29.911611  392242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 11:28:29.921893  392242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 11:28:29.984348  392242 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 11:28:29.984412  392242 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 11:28:30.123453  392242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 11:28:30.123636  392242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 11:28:30.123815  392242 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 11:28:30.372922  392242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 11:28:30.455517  392242 out.go:204]   - Generating certificates and keys ...
	I0805 11:28:30.455678  392242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 11:28:30.455793  392242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 11:28:30.455891  392242 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 11:28:30.542510  392242 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 11:28:30.805444  392242 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 11:28:30.929956  392242 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 11:28:31.306559  392242 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 11:28:31.306818  392242 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-624151 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0805 11:28:31.525964  392242 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 11:28:31.526266  392242 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-624151 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0805 11:28:31.798928  392242 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 11:28:31.889013  392242 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 11:28:32.078756  392242 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 11:28:32.078825  392242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 11:28:32.257153  392242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 11:28:32.553855  392242 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 11:28:32.641238  392242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 11:28:32.849721  392242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 11:28:32.933788  392242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 11:28:32.934422  392242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 11:28:32.936908  392242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 11:28:32.938773  392242 out.go:204]   - Booting up control plane ...
	I0805 11:28:32.938897  392242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 11:28:32.938988  392242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 11:28:32.939219  392242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 11:28:32.954696  392242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 11:28:32.955636  392242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 11:28:32.955703  392242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 11:28:33.084896  392242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 11:28:33.085004  392242 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 11:28:33.586597  392242 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.990366ms
	I0805 11:28:33.586724  392242 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 11:28:38.586530  392242 kubeadm.go:310] [api-check] The API server is healthy after 5.001960162s
	I0805 11:28:38.598390  392242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 11:28:38.613336  392242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 11:28:38.638697  392242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 11:28:38.638920  392242 kubeadm.go:310] [mark-control-plane] Marking the node addons-624151 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 11:28:38.654612  392242 kubeadm.go:310] [bootstrap-token] Using token: 9dtprs.4desv7mp1hzrofda
	I0805 11:28:38.655937  392242 out.go:204]   - Configuring RBAC rules ...
	I0805 11:28:38.656045  392242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 11:28:38.663853  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 11:28:38.674484  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 11:28:38.681354  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 11:28:38.685389  392242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 11:28:38.688922  392242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 11:28:38.993498  392242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 11:28:39.435299  392242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 11:28:39.994201  392242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 11:28:39.994227  392242 kubeadm.go:310] 
	I0805 11:28:39.994289  392242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 11:28:39.994301  392242 kubeadm.go:310] 
	I0805 11:28:39.994384  392242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 11:28:39.994426  392242 kubeadm.go:310] 
	I0805 11:28:39.994485  392242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 11:28:39.994560  392242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 11:28:39.994632  392242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 11:28:39.994640  392242 kubeadm.go:310] 
	I0805 11:28:39.994726  392242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 11:28:39.994741  392242 kubeadm.go:310] 
	I0805 11:28:39.994808  392242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 11:28:39.994817  392242 kubeadm.go:310] 
	I0805 11:28:39.994859  392242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 11:28:39.994923  392242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 11:28:39.994985  392242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 11:28:39.995000  392242 kubeadm.go:310] 
	I0805 11:28:39.995085  392242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 11:28:39.995174  392242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 11:28:39.995181  392242 kubeadm.go:310] 
	I0805 11:28:39.995287  392242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9dtprs.4desv7mp1hzrofda \
	I0805 11:28:39.995386  392242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 11:28:39.995405  392242 kubeadm.go:310] 	--control-plane 
	I0805 11:28:39.995432  392242 kubeadm.go:310] 
	I0805 11:28:39.995543  392242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 11:28:39.995552  392242 kubeadm.go:310] 
	I0805 11:28:39.995644  392242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9dtprs.4desv7mp1hzrofda \
	I0805 11:28:39.995788  392242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 11:28:39.996279  392242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
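Should the join command above be lost, both pieces can be regenerated: kubeadm token create --print-join-command mints a fresh token, and the --discovery-token-ca-cert-hash is just the SHA-256 of the cluster CA's public key (the standard kubeadm recipe, here using minikube's certificate directory rather than /etc/kubernetes/pki):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'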
	I0805 11:28:39.996309  392242 cni.go:84] Creating CNI manager for ""
	I0805 11:28:39.996324  392242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:28:39.998756  392242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 11:28:39.999913  392242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 11:28:40.012040  392242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
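    [editor's illustration] The 1-k8s.conflist copied above is minikube's bridge CNI configuration; its exact 496-byte contents are not reproduced in this log. The following is a hypothetical Go sketch that emits a conflist of roughly that shape (all field values are illustrative assumptions, not the file minikube actually wrote):

        package main

        import (
            "encoding/json"
            "fmt"
        )

        func main() {
            // Illustrative bridge CNI conflist, similar in shape to what lands at
            // /etc/cni/net.d/1-k8s.conflist. Subnet and flags are assumptions.
            conflist := map[string]interface{}{
                "cniVersion": "0.3.1",
                "name":       "bridge",
                "plugins": []map[string]interface{}{
                    {
                        "type":             "bridge",
                        "bridge":           "bridge",
                        "isDefaultGateway": true,
                        "ipMasq":           true,
                        "hairpinMode":      true,
                        "ipam": map[string]interface{}{
                            "type":   "host-local",
                            "subnet": "10.244.0.0/16",
                        },
                    },
                    {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
                },
            }
            out, _ := json.MarshalIndent(conflist, "", "  ")
            fmt.Println(string(out))
        }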
	I0805 11:28:40.033421  392242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 11:28:40.033514  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:40.033525  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-624151 minikube.k8s.io/updated_at=2024_08_05T11_28_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=addons-624151 minikube.k8s.io/primary=true
	I0805 11:28:40.082033  392242 ops.go:34] apiserver oom_adj: -16
	I0805 11:28:40.168298  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:40.669195  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:41.168732  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:41.668805  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:42.168893  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:42.668347  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:43.168590  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:43.668603  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:44.168689  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:44.668341  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:45.168382  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:45.668964  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:46.168441  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:46.668705  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:47.169012  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:47.668975  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:48.169164  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:48.669111  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:49.168358  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:49.668340  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:50.168917  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:50.668835  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:51.169206  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:51.668968  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:52.169264  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:52.669007  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:53.168480  392242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:28:53.272052  392242 kubeadm.go:1113] duration metric: took 13.238603143s to wait for elevateKubeSystemPrivileges
	I0805 11:28:53.272103  392242 kubeadm.go:394] duration metric: took 23.502751026s to StartCluster
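    [editor's illustration] The repeated "kubectl get sa default" runs above are minikube polling until the default service account exists before it grants kube-system cluster-admin (the elevateKubeSystemPrivileges wait that took ~13.2s here). The Go sketch below mirrors that retry pattern; it is a simplified assumption, not minikube's actual implementation, with the kubectl path and kubeconfig flag taken from the log:

        package main

        import (
            "fmt"
            "os/exec"
            "time"
        )

        // waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
        // deadline passes, matching the ~500ms polling cadence visible in the log.
        func waitForDefaultSA(kubectl, kubeconfig string, interval, timeout time.Duration) error {
            deadline := time.Now().Add(timeout)
            for time.Now().Before(deadline) {
                cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
                if err := cmd.Run(); err == nil {
                    return nil // default service account is available
                }
                time.Sleep(interval)
            }
            return fmt.Errorf("timed out after %s waiting for default service account", timeout)
        }

        func main() {
            err := waitForDefaultSA(
                "/var/lib/minikube/binaries/v1.30.3/kubectl",
                "/var/lib/minikube/kubeconfig",
                500*time.Millisecond, 2*time.Minute,
            )
            fmt.Println("wait result:", err)
        }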
	I0805 11:28:53.272130  392242 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:53.272306  392242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:28:53.272827  392242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:28:53.273073  392242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 11:28:53.273102  392242 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:28:53.273190  392242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0805 11:28:53.273300  392242 addons.go:69] Setting yakd=true in profile "addons-624151"
	I0805 11:28:53.273308  392242 addons.go:69] Setting gcp-auth=true in profile "addons-624151"
	I0805 11:28:53.273324  392242 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-624151"
	I0805 11:28:53.273347  392242 addons.go:69] Setting default-storageclass=true in profile "addons-624151"
	I0805 11:28:53.273354  392242 addons.go:69] Setting helm-tiller=true in profile "addons-624151"
	I0805 11:28:53.273370  392242 config.go:182] Loaded profile config "addons-624151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:28:53.273385  392242 addons.go:69] Setting ingress-dns=true in profile "addons-624151"
	I0805 11:28:53.273392  392242 addons.go:69] Setting inspektor-gadget=true in profile "addons-624151"
	I0805 11:28:53.273396  392242 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-624151"
	I0805 11:28:53.273396  392242 addons.go:69] Setting storage-provisioner=true in profile "addons-624151"
	I0805 11:28:53.273404  392242 addons.go:69] Setting metrics-server=true in profile "addons-624151"
	I0805 11:28:53.273413  392242 addons.go:234] Setting addon ingress-dns=true in "addons-624151"
	I0805 11:28:53.273418  392242 addons.go:234] Setting addon storage-provisioner=true in "addons-624151"
	I0805 11:28:53.273421  392242 addons.go:69] Setting ingress=true in profile "addons-624151"
	I0805 11:28:53.273425  392242 addons.go:234] Setting addon metrics-server=true in "addons-624151"
	I0805 11:28:53.273435  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273441  392242 addons.go:234] Setting addon ingress=true in "addons-624151"
	I0805 11:28:53.273450  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273453  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273456  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273466  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273341  392242 mustload.go:65] Loading cluster: addons-624151
	I0805 11:28:53.273379  392242 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-624151"
	I0805 11:28:53.273585  392242 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-624151"
	I0805 11:28:53.273605  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273750  392242 config.go:182] Loaded profile config "addons-624151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:28:53.273348  392242 addons.go:234] Setting addon yakd=true in "addons-624151"
	I0805 11:28:53.273924  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.273996  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.273371  392242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-624151"
	I0805 11:28:53.274043  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273422  392242 addons.go:234] Setting addon inspektor-gadget=true in "addons-624151"
	I0805 11:28:53.273379  392242 addons.go:69] Setting registry=true in profile "addons-624151"
	I0805 11:28:53.274122  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274132  392242 addons.go:234] Setting addon registry=true in "addons-624151"
	I0805 11:28:53.274164  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274168  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274127  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274246  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273388  392242 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-624151"
	I0805 11:28:53.274302  392242 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-624151"
	I0805 11:28:53.273397  392242 addons.go:69] Setting volumesnapshots=true in profile "addons-624151"
	I0805 11:28:53.274317  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274333  392242 addons.go:234] Setting addon volumesnapshots=true in "addons-624151"
	I0805 11:28:53.273940  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274349  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274360  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273925  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.273372  392242 addons.go:234] Setting addon helm-tiller=true in "addons-624151"
	I0805 11:28:53.274446  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274486  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274500  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274510  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274529  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.273349  392242 addons.go:69] Setting cloud-spanner=true in profile "addons-624151"
	I0805 11:28:53.274578  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274592  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.274594  392242 addons.go:234] Setting addon cloud-spanner=true in "addons-624151"
	I0805 11:28:53.274599  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274556  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274613  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274762  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274790  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.274956  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.273388  392242 addons.go:69] Setting volcano=true in profile "addons-624151"
	I0805 11:28:53.274974  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.274990  392242 addons.go:234] Setting addon volcano=true in "addons-624151"
	I0805 11:28:53.275035  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.275068  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.275085  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.275102  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.275105  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.275126  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.275366  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.275896  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.281283  392242 out.go:177] * Verifying Kubernetes components...
	I0805 11:28:53.282778  392242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:28:53.290478  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0805 11:28:53.290993  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.291483  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.291506  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.297093  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.297420  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.297430  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.297477  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.297485  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.298060  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.298098  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.313640  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0805 11:28:53.314400  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.315025  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.315045  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.315482  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.316131  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.316177  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.319860  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0805 11:28:53.320377  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.320968  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.320998  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.321357  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.321574  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.325643  392242 addons.go:234] Setting addon default-storageclass=true in "addons-624151"
	I0805 11:28:53.325695  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.326050  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.326085  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.326352  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0805 11:28:53.326851  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.327387  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.327413  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.327810  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.328393  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.328421  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.329323  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I0805 11:28:53.329817  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.330300  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.330316  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.330720  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.331241  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.331270  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.338402  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0805 11:28:53.338842  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.339356  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.339375  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.339712  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.339906  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.341718  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0805 11:28:53.341898  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
	I0805 11:28:53.343191  392242 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-624151"
	I0805 11:28:53.343231  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.343560  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.343599  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.343853  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.344461  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.344479  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.344914  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.345463  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.345506  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.346621  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.347304  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.347321  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.347761  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.348337  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.348375  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.350054  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0805 11:28:53.350602  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.352436  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.352453  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.353043  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.353844  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.353871  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.354160  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0805 11:28:53.354663  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.355244  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.355262  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.355642  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.356225  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.356266  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.357988  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I0805 11:28:53.358436  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.358789  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0805 11:28:53.359086  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.359102  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.359440  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.359729  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.360008  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.360023  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.360674  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.360719  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.361810  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0805 11:28:53.362362  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.362895  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.362911  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.363288  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.363841  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.363876  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.368411  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.369041  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.369098  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.370015  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37015
	I0805 11:28:53.370550  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.371131  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.371157  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.371221  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0805 11:28:53.371852  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.371986  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.372464  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.372482  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.372876  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.373147  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.374104  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0805 11:28:53.374566  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.375013  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.375030  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.375391  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.375451  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.376203  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.376244  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.376441  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.376484  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.377640  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0805 11:28:53.378853  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0805 11:28:53.378880  392242 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0805 11:28:53.378909  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.381767  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0805 11:28:53.382397  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.383051  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.383070  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.383475  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.383708  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.384348  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.384867  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.384895  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.385210  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.385394  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.385543  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.385726  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.387348  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0805 11:28:53.387495  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0805 11:28:53.388211  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0805 11:28:53.388803  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.388918  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:28:53.389327  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.389365  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.389616  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.389646  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.390264  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.390282  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.390349  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33503
	I0805 11:28:53.390585  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.390606  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.390858  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.391052  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.391294  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.391310  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.391435  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.391526  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.392021  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.392051  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.392417  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.393249  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.393291  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.393593  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.393749  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.393946  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.394279  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:28:53.394335  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:28:53.395767  392242 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0805 11:28:53.396005  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.396323  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44169
	I0805 11:28:53.397140  392242 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 11:28:53.397163  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0805 11:28:53.397184  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.397747  392242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 11:28:53.398465  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.399099  392242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:28:53.399122  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 11:28:53.399143  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.399459  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.399478  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.400454  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.401018  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.401686  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I0805 11:28:53.401877  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0805 11:28:53.402381  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.402473  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0805 11:28:53.402685  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.402821  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.403311  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.403339  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.403427  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.403518  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.403534  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.403567  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.403811  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.403867  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.404164  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.404226  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.404642  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.404998  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.405019  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.405281  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.405499  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.405658  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.405854  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.406480  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.406502  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.406966  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.407036  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.407089  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.407286  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.408736  392242 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0805 11:28:53.408795  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0805 11:28:53.409932  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 11:28:53.409952  392242 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 11:28:53.409974  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.410170  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.410468  392242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 11:28:53.410481  392242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 11:28:53.410499  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.410681  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.411576  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0805 11:28:53.412333  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.412354  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.414015  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.414239  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0805 11:28:53.414493  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.414517  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.414666  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.414717  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.414847  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.414988  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.415118  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.415325  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.415349  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.415761  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.415811  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.416030  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.416306  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.416500  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.416985  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.417452  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0805 11:28:53.418676  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.420037  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0805 11:28:53.420038  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0805 11:28:53.420615  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0805 11:28:53.422886  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0805 11:28:53.423364  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.423412  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.423905  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.423945  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.424160  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.424184  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.424399  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 11:28:53.424423  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.424452  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0805 11:28:53.424625  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.424669  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.424794  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.426533  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.427024  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 11:28:53.427057  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0805 11:28:53.427287  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0805 11:28:53.428237  392242 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0805 11:28:53.428469  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.428583  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.428697  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0805 11:28:53.429116  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.429293  392242 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 11:28:53.429311  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.429320  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0805 11:28:53.429325  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.429340  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.429705  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.429836  392242 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0805 11:28:53.429843  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.429851  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0805 11:28:53.429867  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.430163  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0805 11:28:53.430192  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34365
	I0805 11:28:53.430656  392242 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0805 11:28:53.430747  392242 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0805 11:28:53.430805  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.431307  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.431317  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.431484  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.431548  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I0805 11:28:53.431641  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.431663  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.431852  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0805 11:28:53.431868  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0805 11:28:53.431896  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.432225  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.432403  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.433120  392242 out.go:177]   - Using image docker.io/registry:2.8.3
	I0805 11:28:53.433230  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.433653  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.433891  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:53.433906  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:53.433914  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I0805 11:28:53.434032  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.434093  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:53.434114  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:53.434121  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:53.434133  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:53.434141  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:53.434226  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.434310  392242 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0805 11:28:53.434322  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0805 11:28:53.434340  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.434447  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:53.434461  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:53.434468  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:53.434467  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	W0805 11:28:53.434529  392242 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0805 11:28:53.435011  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.435565  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.435590  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.435901  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.435926  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.436219  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.436583  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.436598  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.437132  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.437208  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.437293  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.437489  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.437456  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.437518  392242 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0805 11:28:53.438095  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.438116  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.438383  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.438587  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.438744  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.438879  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.439023  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.439058  392242 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 11:28:53.439075  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0805 11:28:53.439091  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.439487  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.439501  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.439570  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.440007  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.440015  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.440178  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.440271  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.440288  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.440319  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.440507  392242 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0805 11:28:53.440584  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.440849  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.440789  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.440822  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.440913  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.441023  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.441235  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.441247  392242 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0805 11:28:53.441282  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.441422  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.441763  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.441781  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.441901  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.442007  392242 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0805 11:28:53.442024  392242 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0805 11:28:53.442045  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.442125  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.442409  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.442718  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0805 11:28:53.442735  392242 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0805 11:28:53.442753  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.443037  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.443189  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.443574  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.443592  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.443790  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.444496  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.444684  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.444830  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.445733  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.445833  392242 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0805 11:28:53.446208  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.446287  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.446365  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.446523  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.446669  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.446786  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.447125  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.447240  392242 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0805 11:28:53.447254  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0805 11:28:53.447269  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.447507  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.447522  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.447784  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.447955  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.448128  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.448274  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:28:53.450072  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.450504  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.450527  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.450698  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.450847  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.450995  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.451148  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	W0805 11:28:53.452038  392242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53610->192.168.39.142:22: read: connection reset by peer
	I0805 11:28:53.452065  392242 retry.go:31] will retry after 145.787354ms: ssh: handshake failed: read tcp 192.168.39.1:53610->192.168.39.142:22: read: connection reset by peer
	I0805 11:28:53.454613  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0805 11:28:53.454963  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:28:53.455451  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:28:53.455476  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:28:53.455913  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:28:53.456108  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:28:53.457761  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:28:53.459541  392242 out.go:177]   - Using image docker.io/busybox:stable
	I0805 11:28:53.461088  392242 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0805 11:28:53.462298  392242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 11:28:53.462316  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0805 11:28:53.462337  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:28:53.465279  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.465664  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:28:53.465691  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:28:53.466181  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:28:53.466462  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:28:53.466661  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:28:53.466840  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	W0805 11:28:53.485314  392242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53616->192.168.39.142:22: read: connection reset by peer
	I0805 11:28:53.485355  392242 retry.go:31] will retry after 197.740583ms: ssh: handshake failed: read tcp 192.168.39.1:53616->192.168.39.142:22: read: connection reset by peer
	W0805 11:28:53.684682  392242 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0805 11:28:53.684735  392242 retry.go:31] will retry after 453.404253ms: ssh: handshake failed: EOF
	I0805 11:28:53.722077  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:28:53.839879  392242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0805 11:28:53.839905  392242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0805 11:28:53.873551  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0805 11:28:53.937056  392242 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0805 11:28:53.937086  392242 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0805 11:28:53.939593  392242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0805 11:28:53.939616  392242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0805 11:28:53.961178  392242 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0805 11:28:53.961216  392242 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0805 11:28:53.968824  392242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:28:53.968845  392242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 11:28:53.975230  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 11:28:53.978631  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 11:28:53.994060  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0805 11:28:53.994093  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0805 11:28:54.064218  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 11:28:54.076542  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 11:28:54.076568  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0805 11:28:54.078434  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 11:28:54.084690  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0805 11:28:54.084716  392242 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0805 11:28:54.088075  392242 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0805 11:28:54.088100  392242 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0805 11:28:54.099056  392242 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 11:28:54.099083  392242 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0805 11:28:54.103353  392242 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0805 11:28:54.103380  392242 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0805 11:28:54.130326  392242 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0805 11:28:54.130355  392242 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0805 11:28:54.217782  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0805 11:28:54.217810  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0805 11:28:54.218499  392242 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0805 11:28:54.218526  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0805 11:28:54.220191  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0805 11:28:54.220210  392242 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0805 11:28:54.348362  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0805 11:28:54.348402  392242 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0805 11:28:54.364151  392242 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 11:28:54.364176  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0805 11:28:54.411467  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0805 11:28:54.411497  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0805 11:28:54.417184  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 11:28:54.417207  392242 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 11:28:54.421354  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 11:28:54.425905  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0805 11:28:54.532320  392242 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0805 11:28:54.532355  392242 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0805 11:28:54.680122  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0805 11:28:54.680159  392242 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0805 11:28:54.681761  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0805 11:28:54.681787  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0805 11:28:54.701559  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 11:28:54.721417  392242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 11:28:54.721453  392242 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 11:28:54.871016  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 11:28:54.904692  392242 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0805 11:28:54.904732  392242 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0805 11:28:54.970601  392242 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0805 11:28:54.970635  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0805 11:28:55.016003  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 11:28:55.018227  392242 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0805 11:28:55.018253  392242 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0805 11:28:55.267010  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0805 11:28:55.299260  392242 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0805 11:28:55.299285  392242 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0805 11:28:55.512866  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0805 11:28:55.512893  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0805 11:28:55.541255  392242 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0805 11:28:55.541285  392242 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0805 11:28:55.713412  392242 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 11:28:55.713444  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0805 11:28:55.789517  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0805 11:28:55.789554  392242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0805 11:28:56.173717  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 11:28:56.178268  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0805 11:28:56.178311  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0805 11:28:56.564459  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0805 11:28:56.564486  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0805 11:28:56.956022  392242 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 11:28:56.956203  392242 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0805 11:28:57.170027  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 11:28:57.535128  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.813003444s)
	I0805 11:28:57.535168  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.661577619s)
	I0805 11:28:57.535200  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535211  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535218  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.535224  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.535248  392242 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.566379929s)
	I0805 11:28:57.535276  392242 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0805 11:28:57.535218  392242 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.566352546s)
	I0805 11:28:57.535337  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.560080123s)
	I0805 11:28:57.535367  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535377  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.535781  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.535799  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.535809  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.535817  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.536401  392242 node_ready.go:35] waiting up to 6m0s for node "addons-624151" to be "Ready" ...
	I0805 11:28:57.536503  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.536522  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.536551  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.536559  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.536569  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.536576  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.536595  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.536610  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.536623  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.536632  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.536847  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.536918  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.536925  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.537093  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.537104  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.537198  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:28:57.537236  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.537251  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:57.615292  392242 node_ready.go:49] node "addons-624151" has status "Ready":"True"
	I0805 11:28:57.615325  392242 node_ready.go:38] duration metric: took 78.902708ms for node "addons-624151" to be "Ready" ...
	I0805 11:28:57.615338  392242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:28:57.701322  392242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gmwdn" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:57.706215  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:28:57.706237  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:28:57.706610  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:28:57.706634  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:28:58.073820  392242 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-624151" context rescaled to 1 replicas
	I0805 11:28:58.709710  392242 pod_ready.go:92] pod "coredns-7db6d8ff4d-gmwdn" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.709738  392242 pod_ready.go:81] duration metric: took 1.008372873s for pod "coredns-7db6d8ff4d-gmwdn" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.709751  392242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s7xqd" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.723918  392242 pod_ready.go:92] pod "coredns-7db6d8ff4d-s7xqd" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.723947  392242 pod_ready.go:81] duration metric: took 14.18693ms for pod "coredns-7db6d8ff4d-s7xqd" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.723958  392242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.730686  392242 pod_ready.go:92] pod "etcd-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.730714  392242 pod_ready.go:81] duration metric: took 6.746982ms for pod "etcd-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.730725  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.742980  392242 pod_ready.go:92] pod "kube-apiserver-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.743011  392242 pod_ready.go:81] duration metric: took 12.277228ms for pod "kube-apiserver-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.743024  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.763952  392242 pod_ready.go:92] pod "kube-controller-manager-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:58.763978  392242 pod_ready.go:81] duration metric: took 20.944907ms for pod "kube-controller-manager-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:58.763994  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nbpvj" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.180033  392242 pod_ready.go:92] pod "kube-proxy-nbpvj" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:59.180061  392242 pod_ready.go:81] duration metric: took 416.060257ms for pod "kube-proxy-nbpvj" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.180071  392242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.600019  392242 pod_ready.go:92] pod "kube-scheduler-addons-624151" in "kube-system" namespace has status "Ready":"True"
	I0805 11:28:59.600046  392242 pod_ready.go:81] duration metric: took 419.968921ms for pod "kube-scheduler-addons-624151" in "kube-system" namespace to be "Ready" ...
	I0805 11:28:59.600057  392242 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace to be "Ready" ...
	I0805 11:29:00.500985  392242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0805 11:29:00.501027  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:29:00.504481  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:00.505035  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:29:00.505069  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:00.505265  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:29:00.505557  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:29:00.505728  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:29:00.505889  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:29:00.785710  392242 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0805 11:29:01.020555  392242 addons.go:234] Setting addon gcp-auth=true in "addons-624151"
	I0805 11:29:01.020615  392242 host.go:66] Checking if "addons-624151" exists ...
	I0805 11:29:01.020920  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:29:01.020949  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:29:01.037267  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0805 11:29:01.037751  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:29:01.038337  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:29:01.038362  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:29:01.038731  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:29:01.039252  392242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:29:01.039287  392242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:29:01.054696  392242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
	I0805 11:29:01.055168  392242 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:29:01.055705  392242 main.go:141] libmachine: Using API Version  1
	I0805 11:29:01.055733  392242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:29:01.056084  392242 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:29:01.056275  392242 main.go:141] libmachine: (addons-624151) Calling .GetState
	I0805 11:29:01.058094  392242 main.go:141] libmachine: (addons-624151) Calling .DriverName
	I0805 11:29:01.058355  392242 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0805 11:29:01.058380  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHHostname
	I0805 11:29:01.061816  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:01.062345  392242 main.go:141] libmachine: (addons-624151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:74:67", ip: ""} in network mk-addons-624151: {Iface:virbr1 ExpiryTime:2024-08-05 12:28:08 +0000 UTC Type:0 Mac:52:54:00:7b:74:67 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-624151 Clientid:01:52:54:00:7b:74:67}
	I0805 11:29:01.062374  392242 main.go:141] libmachine: (addons-624151) DBG | domain addons-624151 has defined IP address 192.168.39.142 and MAC address 52:54:00:7b:74:67 in network mk-addons-624151
	I0805 11:29:01.062561  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHPort
	I0805 11:29:01.062772  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHKeyPath
	I0805 11:29:01.062997  392242 main.go:141] libmachine: (addons-624151) Calling .GetSSHUsername
	I0805 11:29:01.063141  392242 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/addons-624151/id_rsa Username:docker}
	I0805 11:29:01.617911  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:02.109487  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.130816341s)
	I0805 11:29:02.109543  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109541  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.045286653s)
	I0805 11:29:02.109576  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.031114184s)
	I0805 11:29:02.109593  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109618  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109556  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109644  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.683703566s)
	I0805 11:29:02.109614  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.688233142s)
	I0805 11:29:02.109672  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109696  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109704  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109624  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.109717  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.109706  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110096  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110099  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110110  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110115  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110118  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110122  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110124  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110099  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110127  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110132  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110139  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110140  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110147  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110149  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110155  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110157  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110161  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110169  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110096  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110098  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110223  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110232  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110239  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110132  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110281  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110323  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110330  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110464  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110483  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.110504  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110510  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110570  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.408976092s)
	W0805 11:29:02.110603  392242 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 11:29:02.110625  392242 retry.go:31] will retry after 322.036799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 11:29:02.110605  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.239561603s)
	I0805 11:29:02.110653  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110673  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.110678  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.094649931s)
	I0805 11:29:02.110684  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110694  392242 addons.go:475] Verifying addon registry=true in "addons-624151"
	I0805 11:29:02.110737  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.843697299s)
	I0805 11:29:02.110751  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110839  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.937089503s)
	I0805 11:29:02.110853  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.110861  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.110753  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.111086  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.111156  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.111182  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.111188  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.111195  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.111202  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112065  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112096  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112103  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.110697  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112133  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112111  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112186  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112518  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112544  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112558  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112561  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112570  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112579  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112584  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112591  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112676  392242 out.go:177] * Verifying registry addon...
	I0805 11:29:02.112790  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112801  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112810  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.112818  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.112867  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.112877  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112885  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.112897  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.112905  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.113159  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.113179  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.113210  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.113218  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.113465  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.113766  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.113777  392242 addons.go:475] Verifying addon ingress=true in "addons-624151"
	I0805 11:29:02.113489  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.114924  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.114935  392242 addons.go:475] Verifying addon metrics-server=true in "addons-624151"
	I0805 11:29:02.113494  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:02.115860  392242 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-624151 service yakd-dashboard -n yakd-dashboard
	
	I0805 11:29:02.116799  392242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0805 11:29:02.117102  392242 out.go:177] * Verifying ingress addon...
	I0805 11:29:02.119458  392242 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0805 11:29:02.133081  392242 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 11:29:02.133100  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:02.133390  392242 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0805 11:29:02.133413  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:02.156755  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:02.156777  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:02.157172  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:02.157196  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:02.432920  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 11:29:02.634791  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:02.641958  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:03.160178  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:03.192398  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:03.345417  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.175330258s)
	I0805 11:29:03.345485  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:03.345497  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:03.345503  392242 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.28711956s)
	I0805 11:29:03.345865  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:03.345913  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:03.345927  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:03.345936  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:03.345895  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:03.346162  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:03.346176  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:03.346196  392242 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-624151"
	I0805 11:29:03.347049  392242 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0805 11:29:03.348045  392242 out.go:177] * Verifying csi-hostpath-driver addon...
	I0805 11:29:03.349208  392242 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 11:29:03.350308  392242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0805 11:29:03.350400  392242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0805 11:29:03.350423  392242 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0805 11:29:03.365249  392242 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 11:29:03.365275  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:03.503754  392242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0805 11:29:03.503788  392242 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0805 11:29:03.622251  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:03.626042  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:03.630350  392242 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 11:29:03.630369  392242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0805 11:29:03.728581  392242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 11:29:03.856312  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:04.106628  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:04.121758  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:04.123389  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:04.286549  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853568405s)
	I0805 11:29:04.286610  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.286627  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.286951  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.286971  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.286981  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.286990  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.287239  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:04.287302  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.287332  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.357236  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:04.620700  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:04.626357  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:04.873459  392242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.144827306s)
	I0805 11:29:04.873560  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.873578  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.873901  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:04.873933  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.873977  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.873990  392242 main.go:141] libmachine: Making call to close driver server
	I0805 11:29:04.873998  392242 main.go:141] libmachine: (addons-624151) Calling .Close
	I0805 11:29:04.874315  392242 main.go:141] libmachine: (addons-624151) DBG | Closing plugin on server side
	I0805 11:29:04.874356  392242 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:29:04.874366  392242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:29:04.876305  392242 addons.go:475] Verifying addon gcp-auth=true in "addons-624151"
	I0805 11:29:04.878162  392242 out.go:177] * Verifying gcp-auth addon...
	I0805 11:29:04.880791  392242 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0805 11:29:04.913922  392242 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0805 11:29:04.913946  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:04.920031  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:05.157737  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:05.162084  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:05.361530  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:05.385307  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:05.623386  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:05.627488  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:05.860821  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:05.884885  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:06.110986  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:06.124453  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:06.127324  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:06.356481  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:06.385153  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:06.623434  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:06.623700  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:06.856109  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:06.885131  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:07.122068  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:07.124591  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:07.357294  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:07.384976  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:07.623498  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:07.623635  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:07.858727  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:07.885027  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:08.121476  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:08.123483  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:08.355842  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:08.384129  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:08.606923  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:08.622016  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:08.623790  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:08.856633  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:08.884973  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:09.124834  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:09.125398  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:09.356576  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:09.384524  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:09.623927  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:09.624445  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:10.062752  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:10.065566  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:10.120994  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:10.123410  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:10.357489  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:10.385844  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:10.621626  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:10.623637  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:10.856584  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:10.884955  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:11.106106  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:11.121246  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:11.123514  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:11.356465  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:11.385108  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:11.621620  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:11.623180  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:11.856118  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:11.884542  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:12.123102  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:12.124350  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:12.355811  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:12.384586  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:12.621317  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:12.622533  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:12.855702  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:12.884737  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:13.107308  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:13.121265  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:13.123618  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:13.356739  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:13.384345  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:13.622007  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:13.624070  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:13.856884  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:13.884280  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:14.120581  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:14.122920  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:14.356840  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:14.384440  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:14.624233  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:14.626668  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:14.855993  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:14.884965  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:15.122551  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:15.129841  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:15.365715  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:15.385488  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:15.608138  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:15.624823  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:15.627576  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:15.856769  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:15.885352  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:16.121787  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:16.130019  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:16.357716  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:16.385492  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:16.621832  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:16.625102  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:16.856923  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:16.885548  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:17.121658  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:17.123527  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:17.356229  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:17.384482  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:17.622382  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:17.624637  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:17.856820  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:17.885581  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:18.431663  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:18.438989  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:18.444580  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:18.444972  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:18.449980  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:18.620788  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:18.623249  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:18.855599  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:18.885061  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:19.124627  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:19.126007  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:19.358446  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:19.384782  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:19.621585  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:19.624423  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:19.856296  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:19.884812  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:20.122446  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:20.123672  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:20.356373  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:20.385012  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:20.607038  392242 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"False"
	I0805 11:29:20.622565  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:20.624345  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:21.041841  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:21.042951  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:21.121973  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:21.123618  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:21.356276  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:21.385444  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:21.622067  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:21.625061  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:21.858688  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:21.884672  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:22.122074  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:22.123419  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:22.356770  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:22.384772  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:22.606728  392242 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace has status "Ready":"True"
	I0805 11:29:22.606751  392242 pod_ready.go:81] duration metric: took 23.006687541s for pod "nvidia-device-plugin-daemonset-kgtjf" in "kube-system" namespace to be "Ready" ...
	I0805 11:29:22.606761  392242 pod_ready.go:38] duration metric: took 24.991409704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:29:22.606795  392242 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:29:22.606862  392242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:29:22.621701  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:22.624813  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:22.629676  392242 api_server.go:72] duration metric: took 29.356540455s to wait for apiserver process to appear ...
	I0805 11:29:22.629693  392242 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:29:22.629770  392242 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0805 11:29:22.634749  392242 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0805 11:29:22.635757  392242 api_server.go:141] control plane version: v1.30.3
	I0805 11:29:22.635786  392242 api_server.go:131] duration metric: took 6.08634ms to wait for apiserver health ...
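
A minimal sketch of the kind of healthz probe recorded just above: poll the apiserver endpoint until it answers HTTP 200 with the body "ok". The URL and timeout are taken from the log purely for illustration, and skipping TLS verification is an assumption made for the sketch (the cluster uses a self-signed certificate); this is example code, not minikube's own api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok" or the
// timeout expires. TLS verification is skipped here only because the sketch
// assumes a self-signed apiserver certificate.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	// Endpoint as reported in the log above; an assumption for this example.
	if err := waitForHealthz("https://192.168.39.142:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
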
	I0805 11:29:22.635794  392242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:29:22.644836  392242 system_pods.go:59] 18 kube-system pods found
	I0805 11:29:22.644862  392242 system_pods.go:61] "coredns-7db6d8ff4d-s7xqd" [6dee3eaa-4dd1-4077-889c-712056552228] Running
	I0805 11:29:22.644870  392242 system_pods.go:61] "csi-hostpath-attacher-0" [2a900d97-2723-48f6-9ef3-6afbc793b8a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 11:29:22.644876  392242 system_pods.go:61] "csi-hostpath-resizer-0" [ff440e79-141d-4812-bfe9-c7d044fb5399] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0805 11:29:22.644883  392242 system_pods.go:61] "csi-hostpathplugin-bcjcs" [14fac8ba-adca-400e-bfb8-6320103d3061] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 11:29:22.644888  392242 system_pods.go:61] "etcd-addons-624151" [f47c334a-333f-499a-987d-ce5af8753b5e] Running
	I0805 11:29:22.644892  392242 system_pods.go:61] "kube-apiserver-addons-624151" [d7cef200-3ea7-4e6a-90a3-0a6cdd323229] Running
	I0805 11:29:22.644895  392242 system_pods.go:61] "kube-controller-manager-addons-624151" [19bf1de0-8ca6-4c6f-b142-84e105adc647] Running
	I0805 11:29:22.644899  392242 system_pods.go:61] "kube-ingress-dns-minikube" [a48e697f-4786-4387-9ef7-f15a45091c80] Running
	I0805 11:29:22.644902  392242 system_pods.go:61] "kube-proxy-nbpvj" [65b10013-8b12-4e89-b735-91ae7c4b32f8] Running
	I0805 11:29:22.644906  392242 system_pods.go:61] "kube-scheduler-addons-624151" [0d5635e3-6d61-40c7-b101-8e3176b4bb01] Running
	I0805 11:29:22.644911  392242 system_pods.go:61] "metrics-server-c59844bb4-f96nq" [7b3be79e-f92b-4158-8829-8fc50c6ebbd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 11:29:22.644918  392242 system_pods.go:61] "nvidia-device-plugin-daemonset-kgtjf" [bb17bf33-643c-4417-8bb1-1814162e0e18] Running
	I0805 11:29:22.644924  392242 system_pods.go:61] "registry-698f998955-kbn7c" [825a2f6e-bea8-4451-bc76-8ab82bd3e8f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 11:29:22.644931  392242 system_pods.go:61] "registry-proxy-6z85d" [f926e212-9d55-48fa-8149-0c86aaff8647] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 11:29:22.644940  392242 system_pods.go:61] "snapshot-controller-745499f584-nft7w" [fd109bf8-f9d0-479f-92af-d7ecbc0b4975] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.644946  392242 system_pods.go:61] "snapshot-controller-745499f584-szg99" [4754f0ca-4286-41b7-ab92-ce41eaf84ae6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.644950  392242 system_pods.go:61] "storage-provisioner" [3bfbac9c-232e-4b87-bd62-216bc17fad0e] Running
	I0805 11:29:22.644956  392242 system_pods.go:61] "tiller-deploy-6677d64bcd-g6dj9" [b48dc3b9-5ca0-4b5c-a47b-ed3b9a318ea5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 11:29:22.644963  392242 system_pods.go:74] duration metric: took 9.163687ms to wait for pod list to return data ...
	I0805 11:29:22.644973  392242 default_sa.go:34] waiting for default service account to be created ...
	I0805 11:29:22.647017  392242 default_sa.go:45] found service account: "default"
	I0805 11:29:22.647033  392242 default_sa.go:55] duration metric: took 2.053393ms for default service account to be created ...
	I0805 11:29:22.647040  392242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 11:29:22.655474  392242 system_pods.go:86] 18 kube-system pods found
	I0805 11:29:22.655497  392242 system_pods.go:89] "coredns-7db6d8ff4d-s7xqd" [6dee3eaa-4dd1-4077-889c-712056552228] Running
	I0805 11:29:22.655505  392242 system_pods.go:89] "csi-hostpath-attacher-0" [2a900d97-2723-48f6-9ef3-6afbc793b8a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 11:29:22.655514  392242 system_pods.go:89] "csi-hostpath-resizer-0" [ff440e79-141d-4812-bfe9-c7d044fb5399] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0805 11:29:22.655522  392242 system_pods.go:89] "csi-hostpathplugin-bcjcs" [14fac8ba-adca-400e-bfb8-6320103d3061] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 11:29:22.655527  392242 system_pods.go:89] "etcd-addons-624151" [f47c334a-333f-499a-987d-ce5af8753b5e] Running
	I0805 11:29:22.655532  392242 system_pods.go:89] "kube-apiserver-addons-624151" [d7cef200-3ea7-4e6a-90a3-0a6cdd323229] Running
	I0805 11:29:22.655536  392242 system_pods.go:89] "kube-controller-manager-addons-624151" [19bf1de0-8ca6-4c6f-b142-84e105adc647] Running
	I0805 11:29:22.655541  392242 system_pods.go:89] "kube-ingress-dns-minikube" [a48e697f-4786-4387-9ef7-f15a45091c80] Running
	I0805 11:29:22.655545  392242 system_pods.go:89] "kube-proxy-nbpvj" [65b10013-8b12-4e89-b735-91ae7c4b32f8] Running
	I0805 11:29:22.655551  392242 system_pods.go:89] "kube-scheduler-addons-624151" [0d5635e3-6d61-40c7-b101-8e3176b4bb01] Running
	I0805 11:29:22.655560  392242 system_pods.go:89] "metrics-server-c59844bb4-f96nq" [7b3be79e-f92b-4158-8829-8fc50c6ebbd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 11:29:22.655564  392242 system_pods.go:89] "nvidia-device-plugin-daemonset-kgtjf" [bb17bf33-643c-4417-8bb1-1814162e0e18] Running
	I0805 11:29:22.655572  392242 system_pods.go:89] "registry-698f998955-kbn7c" [825a2f6e-bea8-4451-bc76-8ab82bd3e8f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 11:29:22.655577  392242 system_pods.go:89] "registry-proxy-6z85d" [f926e212-9d55-48fa-8149-0c86aaff8647] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 11:29:22.655587  392242 system_pods.go:89] "snapshot-controller-745499f584-nft7w" [fd109bf8-f9d0-479f-92af-d7ecbc0b4975] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.655595  392242 system_pods.go:89] "snapshot-controller-745499f584-szg99" [4754f0ca-4286-41b7-ab92-ce41eaf84ae6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 11:29:22.655599  392242 system_pods.go:89] "storage-provisioner" [3bfbac9c-232e-4b87-bd62-216bc17fad0e] Running
	I0805 11:29:22.655607  392242 system_pods.go:89] "tiller-deploy-6677d64bcd-g6dj9" [b48dc3b9-5ca0-4b5c-a47b-ed3b9a318ea5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 11:29:22.655613  392242 system_pods.go:126] duration metric: took 8.567274ms to wait for k8s-apps to be running ...
	I0805 11:29:22.655629  392242 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 11:29:22.655675  392242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:29:22.671442  392242 system_svc.go:56] duration metric: took 15.803966ms WaitForService to wait for kubelet
	I0805 11:29:22.671471  392242 kubeadm.go:582] duration metric: took 29.398338375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:29:22.671492  392242 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:29:22.674666  392242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:29:22.674693  392242 node_conditions.go:123] node cpu capacity is 2
	I0805 11:29:22.674705  392242 node_conditions.go:105] duration metric: took 3.207964ms to run NodePressure ...
	I0805 11:29:22.674717  392242 start.go:241] waiting for startup goroutines ...
	I0805 11:29:22.856263  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:22.885118  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:23.122316  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:23.123798  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:23.357767  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:23.384936  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:23.623568  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:23.625298  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:23.856549  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:23.885037  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:24.121440  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:24.124160  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:24.356180  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:24.384937  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:25.095250  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:25.096628  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:25.098590  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:25.100300  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:25.122180  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:25.124424  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:25.356216  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:25.384890  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:25.622571  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:25.624756  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:25.857101  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:25.885600  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:26.123076  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:26.125246  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:26.356104  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:26.385220  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:26.621672  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:26.624280  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:26.856357  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:26.884817  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:27.122228  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:27.123832  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:27.355221  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:27.390133  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:27.621276  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:27.625340  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:27.856513  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:27.884914  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:28.122533  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:28.125108  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:28.357666  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:28.384101  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:28.622364  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:28.623923  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:28.855842  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:28.884755  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:29.123174  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:29.124439  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:29.362642  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:29.385260  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:29.623961  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:29.627641  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:29.857411  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:29.885201  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:30.122809  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:30.125113  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:30.356135  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:30.385231  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:30.621792  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:30.624999  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:30.856688  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:30.884241  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:31.122104  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:31.124984  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:31.356400  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:31.385521  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:31.622253  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:31.623404  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:31.855950  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:31.884279  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:32.125311  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:32.126562  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:32.357049  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:32.384338  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:32.889258  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:32.891080  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:32.895056  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:32.895996  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:33.124216  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:33.124573  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:33.356053  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:33.384477  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:33.622122  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:33.623706  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:33.858896  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:33.886533  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:34.122430  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:34.123857  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:34.357212  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:34.384108  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:34.623909  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:34.624363  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:34.857345  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:34.885375  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:35.121976  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:35.125256  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:35.355889  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:35.384539  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:35.623342  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:35.625106  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:35.856447  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:35.886289  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:36.121721  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:36.124197  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:36.355871  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:36.384423  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:36.622951  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:36.625086  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:36.855474  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:36.885608  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:37.121883  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:37.124140  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:37.356106  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:37.384331  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:37.620762  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:37.623258  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:37.856305  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:37.884560  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:38.121752  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:38.124610  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:38.357451  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:38.384766  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:38.621809  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:38.624936  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:38.855327  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:38.885055  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:39.121864  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:39.125025  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:39.356489  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:39.385083  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:39.622308  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:39.624786  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:39.855027  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:39.884042  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:40.121210  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:40.124944  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:40.358169  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:40.386479  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:40.624354  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:40.624478  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:40.858583  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:40.884861  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:41.122067  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:41.125065  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:41.355893  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:41.385073  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:41.621340  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:41.623996  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:41.858831  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:41.888443  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:42.121923  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:42.124835  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:42.357172  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:42.384306  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:42.622332  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:42.624915  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:42.855994  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:42.885125  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:43.122661  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:43.125070  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:43.355523  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:43.385282  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:43.621430  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 11:29:43.624628  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:43.856434  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:43.886215  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:44.121666  392242 kapi.go:107] duration metric: took 42.004863243s to wait for kubernetes.io/minikube-addons=registry ...
	I0805 11:29:44.124108  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:44.355872  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:44.384579  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:44.624150  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:44.855976  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:44.884996  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:45.125286  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:45.356496  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:45.385257  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:45.624446  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:45.856504  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:45.885553  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:46.124854  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:46.357088  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:46.385348  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:46.624429  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:46.856287  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:46.884335  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:47.124695  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:47.357652  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:47.384647  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:47.623930  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:47.855805  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:47.885304  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:48.124493  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:48.356858  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:48.385025  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:48.626305  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:48.859226  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:48.885358  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:49.124525  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:49.356179  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:49.384715  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:50.080392  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:50.081072  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:50.081387  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:50.129186  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:50.356113  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:50.384521  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:50.626737  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:50.861620  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:50.885098  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:51.124185  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:51.356637  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:51.384285  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:51.624183  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:51.856460  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:51.884725  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:52.123708  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:52.356737  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:52.384632  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:52.624289  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:52.857576  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:52.885439  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:53.123843  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:53.356488  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:53.384729  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:53.624298  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:53.856557  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:53.886346  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:54.124598  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:54.357428  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:54.384696  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:54.623908  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:54.855693  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:54.885457  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:55.136039  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:55.358111  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:55.384546  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:55.624353  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:55.857544  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:55.886717  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:56.127388  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:56.355905  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:56.385341  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:56.624261  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:56.856501  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:56.889564  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:57.123220  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:57.356162  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:57.384686  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:57.623479  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:57.857683  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:57.884618  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:58.123206  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:58.355921  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:58.384448  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:58.625053  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:58.856097  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:58.885758  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:59.123581  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:59.357011  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:59.385142  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:29:59.624300  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:29:59.856838  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:29:59.885463  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:00.124600  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:00.355604  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:00.392602  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:00.624931  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:00.857249  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:00.884297  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:01.124223  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:01.356509  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:01.385046  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:01.625917  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:01.855760  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:01.884098  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:02.130252  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:02.356266  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:02.384461  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:02.626641  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:02.859094  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:02.887856  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:03.124566  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:03.355211  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:03.384806  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:03.625147  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:03.855803  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:03.884107  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:04.126328  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:04.361618  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:04.388864  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:04.624162  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:04.856352  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:04.884563  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:05.126913  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:05.356823  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:05.384978  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:05.624535  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:05.857619  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:05.885480  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:06.128626  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:06.357289  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:06.384146  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:06.624498  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:06.856645  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:06.884503  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:07.124695  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:07.626477  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:07.627950  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:07.629189  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:07.859854  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:07.886994  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:08.130587  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:08.356930  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:08.392798  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:08.625098  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:08.867376  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:08.886332  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:09.124820  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:09.359000  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:09.386725  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:09.623804  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:09.856249  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:09.884619  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:10.123581  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:10.357777  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:10.385105  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:10.623787  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:10.856635  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:10.886102  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:11.124326  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:11.356075  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:11.385632  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:11.637153  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:11.855995  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:11.884458  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:12.124139  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:12.356952  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:12.401403  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:12.625332  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:12.858980  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:12.885311  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:13.124566  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:13.356510  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:13.384996  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:13.624551  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:14.041621  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:14.042809  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:14.123987  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:14.355651  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:14.384402  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:14.624179  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:14.855502  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:14.885175  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:15.124687  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:15.356137  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:15.385035  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:15.627146  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:15.856372  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:15.885436  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:16.124934  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:16.356001  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:16.384893  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:16.624579  392242 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 11:30:16.868866  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:16.884032  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:17.123999  392242 kapi.go:107] duration metric: took 1m15.004537622s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0805 11:30:17.359205  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:17.384813  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:17.856438  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:17.884152  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:18.357127  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:18.384525  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:18.856132  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:18.884722  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:19.356552  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:19.386136  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:19.859469  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:19.884979  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:20.357822  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:20.384777  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:20.857192  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:20.884936  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:21.358242  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:21.384653  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:21.857808  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:21.885176  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:22.358778  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:22.385912  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:22.856861  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:22.888993  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 11:30:23.358307  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:23.384940  392242 kapi.go:107] duration metric: took 1m18.50415092s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0805 11:30:23.386817  392242 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-624151 cluster.
	I0805 11:30:23.388173  392242 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0805 11:30:23.389479  392242 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0805 11:30:23.857022  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:24.544940  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:24.855422  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:25.357268  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:25.856640  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:26.359105  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:26.856358  392242 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 11:30:27.356018  392242 kapi.go:107] duration metric: took 1m24.005708679s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0805 11:30:27.357712  392242 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, default-storageclass, helm-tiller, nvidia-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0805 11:30:27.359214  392242 addons.go:510] duration metric: took 1m34.086031545s for enable addons: enabled=[cloud-spanner storage-provisioner default-storageclass helm-tiller nvidia-device-plugin inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0805 11:30:27.359251  392242 start.go:246] waiting for cluster config update ...
	I0805 11:30:27.359268  392242 start.go:255] writing updated cluster config ...
	I0805 11:30:27.359539  392242 ssh_runner.go:195] Run: rm -f paused
	I0805 11:30:27.412705  392242 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 11:30:27.414651  392242 out.go:177] * Done! kubectl is now configured to use "addons-624151" cluster and "default" namespace by default
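	(Editor's note) The hundreds of kapi.go:96 lines above come from minikube repeatedly polling each addon's pods by label selector (roughly every 250-500 ms) and logging the observed phase; once the pods report Running, the kapi.go:107 "duration metric: took ..." line is emitted. Below is a minimal sketch of that wait-loop pattern, assuming a hypothetical probe callback in place of the real Kubernetes API query; it is not minikube's actual kapi.go implementation.

	    // Sketch of a label-selector wait loop like the one producing the log lines above.
	    // The probe callback is hypothetical; minikube would query the API server for pods
	    // matching the selector and report their aggregate phase.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"log"
	    	"time"
	    )

	    type checkPods func(ctx context.Context, selector string) (phase string, ready bool, err error)

	    func waitForPods(ctx context.Context, selector string, interval, timeout time.Duration, probe checkPods) error {
	    	start := time.Now()
	    	deadline := time.NewTimer(timeout)
	    	defer deadline.Stop()
	    	tick := time.NewTicker(interval)
	    	defer tick.Stop()

	    	for {
	    		phase, ready, err := probe(ctx, selector)
	    		if err == nil && ready {
	    			// Mirrors the kapi.go:107 summary line.
	    			log.Printf("duration metric: took %s to wait for %s ...", time.Since(start), selector)
	    			return nil
	    		}
	    		// Mirrors the repeated kapi.go:96 "waiting for pod" lines.
	    		log.Printf("waiting for pod %q, current state: %s: [%v]", selector, phase, err)

	    		select {
	    		case <-tick.C:
	    		case <-deadline.C:
	    			return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
	    		case <-ctx.Done():
	    			return ctx.Err()
	    		}
	    	}
	    }

	    func main() {
	    	// Toy probe that reports Pending a few times before becoming ready.
	    	calls := 0
	    	probe := func(ctx context.Context, selector string) (string, bool, error) {
	    		calls++
	    		if calls < 4 {
	    			return "Pending", false, nil
	    		}
	    		return "Running", true, nil
	    	}
	    	_ = waitForPods(context.Background(), "app.kubernetes.io/name=ingress-nginx",
	    		250*time.Millisecond, 6*time.Minute, probe)
	    }

	With the toy probe above it would print the same kind of "waiting for pod ... Pending" line on each tick and then the duration-metric line once the probe reports Running, which is the shape of the output captured in this test log.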
	
	
	==> CRI-O <==
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.492135594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722857775492107009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c29aef96-642c-4ba8-8afd-a2b02a472f96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.492753194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e27ce1f8-c65b-488a-9d7d-18572b46520f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.492824247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e27ce1f8-c65b-488a-9d7d-18572b46520f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.493180000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29967c1a34fdd70f38938c81105eee2de604acad5c342ceb0528aea72aeaa6b2,PodSandboxId:5af76ddc44262f44f77a78f8266db7a3f6a4a8eb3cf17ed5a253203e5bbf0f3d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722857642178722530,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-766vd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a9c74668-33fb-4764-a90a-a62b6278412b,},Annotations:map[string]string{io.kubernetes.container.hash: 4bc11893,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db49fd8f431c9e97fddb6f55e5323780f859599ab01f8c5e5a140995076f8112,PodSandboxId:92acbbfa3733bd8790a8f1df24d4db773591b117bf036aadd7059d0063828729,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722857502510523235,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c086e51-e9aa-47d4-b5da-7196cbb25a28,},Annotations:map[string]string{io.kubernet
es.container.hash: 57261cd3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f269744d60f0b2b6497481c781c5c68ef2a589cb1ec32595a1a3a2ded79fead3,PodSandboxId:7e71318d6d00288614e6c56f44eda2a01ffecf1485b418000f2289fe3ac1f81c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722857431226184122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 615be7ce-86e2-476f-8
1a0-9c656f5b27ad,},Annotations:map[string]string{io.kubernetes.container.hash: d45a858f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21dd489749ff06cb7a6f20a0f0b8aec17868a25adc1ae767f7f5f3843c78fbf,PodSandboxId:18a4f31a0fb9b487e490d8a6fcd523237e7d26d378a924036a8d02270bcb219b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722857367357639978,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f96nq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7b3be79e-f92b-4158-8829-8fc50c6ebbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 1bddcb70,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c,PodSandboxId:aef1255e7abe477800eda44354377e46c87feec3a47ab6320c1d5e22f71c01b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722857338775026247,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bfbac9c-232e-4b87-bd62-216bc17fad0e,},Annotations:map[string]string{io.kubernetes.container.hash: 72dfffbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028,PodSandboxId:2202578a252573a14cc49f72422bd2c2e36ae6488cf22805191908c9f0dd29ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722857336396783354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-s7xqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dee3eaa-4dd1-4077-889c-712056552228,},Annotations:map[string]string{io.kubernetes.container.hash: 748f2ff8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428,PodSandboxId:e948c450eec0ec2bf0c69c35be019b5a77e99b275771cfe3f32e96355fd1e5a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722857334011999123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbpvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b10013-8b12-4e89-b735-91ae7c4b32f8,},Annotations:map[string]string{io.kubernetes.container.hash: 92541790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,PodSandboxId:8160c730de833c3f57575a025962de757bfbc94cfc9443b91505de1bcdedfadb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722857314210285013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e410e9957c9bb7ef05423de94b75d113,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,PodSandboxId:4eff185c0adebdaed2d00dc578176bfab416d976f3301d55cd6f3725d5c2f82d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722857314201454698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f0da1f77e0045af5e54ecd4e121311e,},Annotations:map[string]string{io.kubernetes.container.hash: a0ed5d74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,PodSandboxId:0daf17a00555e0f3f4f4f026d6a90a88e3925ba06b19fbe913cf57eef9b92a8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17228573141233
49501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2128468ad8768be81ae3f787fbd1b4d,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,PodSandboxId:2a6d39055991a2056ae7148b498e25a2d06df7d6c41c5b6c99a8305ffbf2aa0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722857314094956467,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 600ddff116cca39e51d6b17e354e744e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e27ce1f8-c65b-488a-9d7d-18572b46520f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.535712500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e653a4b4-8f39-4b2d-8e82-ec7d17e7ab9e name=/runtime.v1.RuntimeService/Version
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.535816621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e653a4b4-8f39-4b2d-8e82-ec7d17e7ab9e name=/runtime.v1.RuntimeService/Version
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.537402110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af302e9f-46df-4a03-b4d8-aaa3f324be60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.538661842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722857775538634561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af302e9f-46df-4a03-b4d8-aaa3f324be60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.539365711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e7341b6-2fed-4088-abab-b36741c4cd8c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.539432544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e7341b6-2fed-4088-abab-b36741c4cd8c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.539699850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29967c1a34fdd70f38938c81105eee2de604acad5c342ceb0528aea72aeaa6b2,PodSandboxId:5af76ddc44262f44f77a78f8266db7a3f6a4a8eb3cf17ed5a253203e5bbf0f3d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722857642178722530,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-766vd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a9c74668-33fb-4764-a90a-a62b6278412b,},Annotations:map[string]string{io.kubernetes.container.hash: 4bc11893,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db49fd8f431c9e97fddb6f55e5323780f859599ab01f8c5e5a140995076f8112,PodSandboxId:92acbbfa3733bd8790a8f1df24d4db773591b117bf036aadd7059d0063828729,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722857502510523235,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c086e51-e9aa-47d4-b5da-7196cbb25a28,},Annotations:map[string]string{io.kubernet
es.container.hash: 57261cd3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f269744d60f0b2b6497481c781c5c68ef2a589cb1ec32595a1a3a2ded79fead3,PodSandboxId:7e71318d6d00288614e6c56f44eda2a01ffecf1485b418000f2289fe3ac1f81c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722857431226184122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 615be7ce-86e2-476f-8
1a0-9c656f5b27ad,},Annotations:map[string]string{io.kubernetes.container.hash: d45a858f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21dd489749ff06cb7a6f20a0f0b8aec17868a25adc1ae767f7f5f3843c78fbf,PodSandboxId:18a4f31a0fb9b487e490d8a6fcd523237e7d26d378a924036a8d02270bcb219b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722857367357639978,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f96nq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7b3be79e-f92b-4158-8829-8fc50c6ebbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 1bddcb70,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c,PodSandboxId:aef1255e7abe477800eda44354377e46c87feec3a47ab6320c1d5e22f71c01b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722857338775026247,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bfbac9c-232e-4b87-bd62-216bc17fad0e,},Annotations:map[string]string{io.kubernetes.container.hash: 72dfffbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028,PodSandboxId:2202578a252573a14cc49f72422bd2c2e36ae6488cf22805191908c9f0dd29ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722857336396783354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-s7xqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dee3eaa-4dd1-4077-889c-712056552228,},Annotations:map[string]string{io.kubernetes.container.hash: 748f2ff8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428,PodSandboxId:e948c450eec0ec2bf0c69c35be019b5a77e99b275771cfe3f32e96355fd1e5a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722857334011999123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbpvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b10013-8b12-4e89-b735-91ae7c4b32f8,},Annotations:map[string]string{io.kubernetes.container.hash: 92541790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,PodSandboxId:8160c730de833c3f57575a025962de757bfbc94cfc9443b91505de1bcdedfadb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722857314210285013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e410e9957c9bb7ef05423de94b75d113,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,PodSandboxId:4eff185c0adebdaed2d00dc578176bfab416d976f3301d55cd6f3725d5c2f82d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722857314201454698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f0da1f77e0045af5e54ecd4e121311e,},Annotations:map[string]string{io.kubernetes.container.hash: a0ed5d74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,PodSandboxId:0daf17a00555e0f3f4f4f026d6a90a88e3925ba06b19fbe913cf57eef9b92a8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17228573141233
49501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2128468ad8768be81ae3f787fbd1b4d,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,PodSandboxId:2a6d39055991a2056ae7148b498e25a2d06df7d6c41c5b6c99a8305ffbf2aa0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722857314094956467,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 600ddff116cca39e51d6b17e354e744e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e7341b6-2fed-4088-abab-b36741c4cd8c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.580608018Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37d6fa3c-9f42-4c22-b068-181be27b5849 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.580683405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37d6fa3c-9f42-4c22-b068-181be27b5849 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.582395392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=698122d5-c9e0-4f22-a6be-2f0b191c547d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.583588304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722857775583563865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=698122d5-c9e0-4f22-a6be-2f0b191c547d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.584250604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c3418ab-6ab0-48d0-abaf-a26e118f1bd5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.584317668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c3418ab-6ab0-48d0-abaf-a26e118f1bd5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.584592274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29967c1a34fdd70f38938c81105eee2de604acad5c342ceb0528aea72aeaa6b2,PodSandboxId:5af76ddc44262f44f77a78f8266db7a3f6a4a8eb3cf17ed5a253203e5bbf0f3d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722857642178722530,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-766vd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a9c74668-33fb-4764-a90a-a62b6278412b,},Annotations:map[string]string{io.kubernetes.container.hash: 4bc11893,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db49fd8f431c9e97fddb6f55e5323780f859599ab01f8c5e5a140995076f8112,PodSandboxId:92acbbfa3733bd8790a8f1df24d4db773591b117bf036aadd7059d0063828729,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722857502510523235,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c086e51-e9aa-47d4-b5da-7196cbb25a28,},Annotations:map[string]string{io.kubernet
es.container.hash: 57261cd3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f269744d60f0b2b6497481c781c5c68ef2a589cb1ec32595a1a3a2ded79fead3,PodSandboxId:7e71318d6d00288614e6c56f44eda2a01ffecf1485b418000f2289fe3ac1f81c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722857431226184122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 615be7ce-86e2-476f-8
1a0-9c656f5b27ad,},Annotations:map[string]string{io.kubernetes.container.hash: d45a858f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21dd489749ff06cb7a6f20a0f0b8aec17868a25adc1ae767f7f5f3843c78fbf,PodSandboxId:18a4f31a0fb9b487e490d8a6fcd523237e7d26d378a924036a8d02270bcb219b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722857367357639978,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f96nq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7b3be79e-f92b-4158-8829-8fc50c6ebbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 1bddcb70,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c,PodSandboxId:aef1255e7abe477800eda44354377e46c87feec3a47ab6320c1d5e22f71c01b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722857338775026247,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bfbac9c-232e-4b87-bd62-216bc17fad0e,},Annotations:map[string]string{io.kubernetes.container.hash: 72dfffbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028,PodSandboxId:2202578a252573a14cc49f72422bd2c2e36ae6488cf22805191908c9f0dd29ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722857336396783354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-s7xqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dee3eaa-4dd1-4077-889c-712056552228,},Annotations:map[string]string{io.kubernetes.container.hash: 748f2ff8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428,PodSandboxId:e948c450eec0ec2bf0c69c35be019b5a77e99b275771cfe3f32e96355fd1e5a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722857334011999123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbpvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b10013-8b12-4e89-b735-91ae7c4b32f8,},Annotations:map[string]string{io.kubernetes.container.hash: 92541790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,PodSandboxId:8160c730de833c3f57575a025962de757bfbc94cfc9443b91505de1bcdedfadb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722857314210285013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e410e9957c9bb7ef05423de94b75d113,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,PodSandboxId:4eff185c0adebdaed2d00dc578176bfab416d976f3301d55cd6f3725d5c2f82d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722857314201454698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f0da1f77e0045af5e54ecd4e121311e,},Annotations:map[string]string{io.kubernetes.container.hash: a0ed5d74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,PodSandboxId:0daf17a00555e0f3f4f4f026d6a90a88e3925ba06b19fbe913cf57eef9b92a8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17228573141233
49501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2128468ad8768be81ae3f787fbd1b4d,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,PodSandboxId:2a6d39055991a2056ae7148b498e25a2d06df7d6c41c5b6c99a8305ffbf2aa0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722857314094956467,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 600ddff116cca39e51d6b17e354e744e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c3418ab-6ab0-48d0-abaf-a26e118f1bd5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.618653145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15bf15d3-5602-46c4-966f-0592af198a84 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.618747206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15bf15d3-5602-46c4-966f-0592af198a84 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.619767094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e24837c9-c1fb-4f58-bba9-ee1e9a41169f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.621491616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722857775621464758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e24837c9-c1fb-4f58-bba9-ee1e9a41169f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.622053553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3faca23c-507f-429c-b911-f633a4eac280 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.622123992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3faca23c-507f-429c-b911-f633a4eac280 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:36:15 addons-624151 crio[682]: time="2024-08-05 11:36:15.622373300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29967c1a34fdd70f38938c81105eee2de604acad5c342ceb0528aea72aeaa6b2,PodSandboxId:5af76ddc44262f44f77a78f8266db7a3f6a4a8eb3cf17ed5a253203e5bbf0f3d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722857642178722530,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-766vd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a9c74668-33fb-4764-a90a-a62b6278412b,},Annotations:map[string]string{io.kubernetes.container.hash: 4bc11893,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db49fd8f431c9e97fddb6f55e5323780f859599ab01f8c5e5a140995076f8112,PodSandboxId:92acbbfa3733bd8790a8f1df24d4db773591b117bf036aadd7059d0063828729,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722857502510523235,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c086e51-e9aa-47d4-b5da-7196cbb25a28,},Annotations:map[string]string{io.kubernet
es.container.hash: 57261cd3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f269744d60f0b2b6497481c781c5c68ef2a589cb1ec32595a1a3a2ded79fead3,PodSandboxId:7e71318d6d00288614e6c56f44eda2a01ffecf1485b418000f2289fe3ac1f81c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722857431226184122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 615be7ce-86e2-476f-8
1a0-9c656f5b27ad,},Annotations:map[string]string{io.kubernetes.container.hash: d45a858f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21dd489749ff06cb7a6f20a0f0b8aec17868a25adc1ae767f7f5f3843c78fbf,PodSandboxId:18a4f31a0fb9b487e490d8a6fcd523237e7d26d378a924036a8d02270bcb219b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722857367357639978,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-f96nq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7b3be79e-f92b-4158-8829-8fc50c6ebbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 1bddcb70,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c,PodSandboxId:aef1255e7abe477800eda44354377e46c87feec3a47ab6320c1d5e22f71c01b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722857338775026247,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bfbac9c-232e-4b87-bd62-216bc17fad0e,},Annotations:map[string]string{io.kubernetes.container.hash: 72dfffbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028,PodSandboxId:2202578a252573a14cc49f72422bd2c2e36ae6488cf22805191908c9f0dd29ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722857336396783354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-s7xqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dee3eaa-4dd1-4077-889c-712056552228,},Annotations:map[string]string{io.kubernetes.container.hash: 748f2ff8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428,PodSandboxId:e948c450eec0ec2bf0c69c35be019b5a77e99b275771cfe3f32e96355fd1e5a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722857334011999123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbpvj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b10013-8b12-4e89-b735-91ae7c4b32f8,},Annotations:map[string]string{io.kubernetes.container.hash: 92541790,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928,PodSandboxId:8160c730de833c3f57575a025962de757bfbc94cfc9443b91505de1bcdedfadb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722857314210285013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e410e9957c9bb7ef05423de94b75d113,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9,PodSandboxId:4eff185c0adebdaed2d00dc578176bfab416d976f3301d55cd6f3725d5c2f82d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722857314201454698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f0da1f77e0045af5e54ecd4e121311e,},Annotations:map[string]string{io.kubernetes.container.hash: a0ed5d74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f,PodSandboxId:0daf17a00555e0f3f4f4f026d6a90a88e3925ba06b19fbe913cf57eef9b92a8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17228573141233
49501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2128468ad8768be81ae3f787fbd1b4d,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a,PodSandboxId:2a6d39055991a2056ae7148b498e25a2d06df7d6c41c5b6c99a8305ffbf2aa0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722857314094956467,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-624151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 600ddff116cca39e51d6b17e354e744e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3faca23c-507f-429c-b911-f633a4eac280 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29967c1a34fdd       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   5af76ddc44262       hello-world-app-6778b5fc9f-766vd
	db49fd8f431c9       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   92acbbfa3733b       nginx
	f269744d60f0b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   7e71318d6d002       busybox
	a21dd489749ff       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   18a4f31a0fb9b       metrics-server-c59844bb4-f96nq
	c11137f2baffb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   aef1255e7abe4       storage-provisioner
	93a73ab29d9db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   2202578a25257       coredns-7db6d8ff4d-s7xqd
	bf8488bfd154a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   e948c450eec0e       kube-proxy-nbpvj
	1fbda93b1b1b6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   8160c730de833       kube-scheduler-addons-624151
	ea77cae92000d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   4eff185c0adeb       etcd-addons-624151
	c7da48507d231       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   0daf17a00555e       kube-apiserver-addons-624151
	7f11721040ddc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   2a6d39055991a       kube-controller-manager-addons-624151
	
	
	==> coredns [93a73ab29d9dbbf0c576f434d6ba9272e2177689692a8f9aa8aac76ed4fc9028] <==
	[INFO] 10.244.0.7:40176 - 13185 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000425464s
	[INFO] 10.244.0.7:34591 - 49047 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095625s
	[INFO] 10.244.0.7:34591 - 3945 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055104s
	[INFO] 10.244.0.7:49726 - 49481 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076157s
	[INFO] 10.244.0.7:49726 - 40023 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00097384s
	[INFO] 10.244.0.7:51199 - 14602 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000257994s
	[INFO] 10.244.0.7:51199 - 31496 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202688s
	[INFO] 10.244.0.7:59351 - 13513 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000057116s
	[INFO] 10.244.0.7:59351 - 63683 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000193861s
	[INFO] 10.244.0.7:52073 - 10001 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033086s
	[INFO] 10.244.0.7:52073 - 28951 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170336s
	[INFO] 10.244.0.7:37647 - 22512 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056482s
	[INFO] 10.244.0.7:37647 - 23794 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00002733s
	[INFO] 10.244.0.7:32997 - 30507 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080099s
	[INFO] 10.244.0.7:32997 - 46121 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117637s
	[INFO] 10.244.0.22:56323 - 29064 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00075055s
	[INFO] 10.244.0.22:60511 - 4358 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151184s
	[INFO] 10.244.0.22:43808 - 51766 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000480222s
	[INFO] 10.244.0.22:42209 - 33096 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000394441s
	[INFO] 10.244.0.22:47722 - 40890 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089353s
	[INFO] 10.244.0.22:58614 - 9577 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063844s
	[INFO] 10.244.0.22:45815 - 62497 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00086798s
	[INFO] 10.244.0.22:58134 - 62897 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000967947s
	[INFO] 10.244.0.24:52178 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00040037s
	[INFO] 10.244.0.24:48746 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145385s
	
	
	==> describe nodes <==
	Name:               addons-624151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-624151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=addons-624151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T11_28_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-624151
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:28:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-624151
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:36:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:34:16 +0000   Mon, 05 Aug 2024 11:28:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:34:16 +0000   Mon, 05 Aug 2024 11:28:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:34:16 +0000   Mon, 05 Aug 2024 11:28:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:34:16 +0000   Mon, 05 Aug 2024 11:28:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    addons-624151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 300940b5f96141d395f2b88a58f331cd
	  System UUID:                300940b5-f961-41d3-95f2-b88a58f331cd
	  Boot ID:                    e20994b0-235e-42ff-8124-7b64eb456736
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  default                     hello-world-app-6778b5fc9f-766vd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-7db6d8ff4d-s7xqd                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m22s
	  kube-system                 etcd-addons-624151                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m36s
	  kube-system                 kube-apiserver-addons-624151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-controller-manager-addons-624151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-proxy-nbpvj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-addons-624151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m42s (x8 over 7m42s)  kubelet          Node addons-624151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x8 over 7m42s)  kubelet          Node addons-624151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x7 over 7m42s)  kubelet          Node addons-624151 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m36s                  kubelet          Node addons-624151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s                  kubelet          Node addons-624151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s                  kubelet          Node addons-624151 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m35s                  kubelet          Node addons-624151 status is now: NodeReady
	  Normal  RegisteredNode           7m23s                  node-controller  Node addons-624151 event: Registered Node addons-624151 in Controller
	
	
	==> dmesg <==
	[  +5.027228] kauditd_printk_skb: 110 callbacks suppressed
	[Aug 5 11:29] kauditd_printk_skb: 131 callbacks suppressed
	[  +6.156128] kauditd_printk_skb: 84 callbacks suppressed
	[ +16.885125] kauditd_printk_skb: 4 callbacks suppressed
	[ +16.260495] kauditd_printk_skb: 4 callbacks suppressed
	[Aug 5 11:30] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.166187] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.563756] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.069184] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.115817] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.284330] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.978967] kauditd_printk_skb: 2 callbacks suppressed
	[Aug 5 11:31] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.056795] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.764342] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.125022] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.102303] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.052259] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.769727] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.936499] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.830297] kauditd_printk_skb: 5 callbacks suppressed
	[Aug 5 11:32] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.172160] kauditd_printk_skb: 43 callbacks suppressed
	[Aug 5 11:33] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 5 11:34] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [ea77cae92000d4c3148918a81861d1b11bd73553e26ec733761698ded5b7c2e9] <==
	{"level":"warn","ts":"2024-08-05T11:30:14.021662Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:30:13.690306Z","time spent":"331.295484ms","remote":"127.0.0.1:44704","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1136 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-05T11:30:14.027486Z","caller":"traceutil/trace.go:171","msg":"trace[251119462] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"184.168218ms","start":"2024-08-05T11:30:13.843305Z","end":"2024-08-05T11:30:14.027473Z","steps":["trace[251119462] 'read index received'  (duration: 178.681057ms)","trace[251119462] 'applied index is now lower than readState.Index'  (duration: 5.48652ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T11:30:14.027916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.554036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-08-05T11:30:14.02813Z","caller":"traceutil/trace.go:171","msg":"trace[484340735] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1139; }","duration":"184.85522ms","start":"2024-08-05T11:30:13.843264Z","end":"2024-08-05T11:30:14.028119Z","steps":["trace[484340735] 'agreement among raft nodes before linearized reading'  (duration: 184.436694ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:30:14.028385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.898977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11790"}
	{"level":"info","ts":"2024-08-05T11:30:14.02847Z","caller":"traceutil/trace.go:171","msg":"trace[1892054956] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1139; }","duration":"155.00604ms","start":"2024-08-05T11:30:13.873457Z","end":"2024-08-05T11:30:14.028463Z","steps":["trace[1892054956] 'agreement among raft nodes before linearized reading'  (duration: 154.864009ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:30:24.530504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.809459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-08-05T11:30:24.530561Z","caller":"traceutil/trace.go:171","msg":"trace[1499690658] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1200; }","duration":"187.891672ms","start":"2024-08-05T11:30:24.342649Z","end":"2024-08-05T11:30:24.530541Z","steps":["trace[1499690658] 'range keys from in-memory index tree'  (duration: 187.545072ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:30:56.075994Z","caller":"traceutil/trace.go:171","msg":"trace[1931072354] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"109.916144ms","start":"2024-08-05T11:30:55.966048Z","end":"2024-08-05T11:30:56.075965Z","steps":["trace[1931072354] 'process raft request'  (duration: 109.547607ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:22.93114Z","caller":"traceutil/trace.go:171","msg":"trace[2066504602] transaction","detail":"{read_only:false; response_revision:1579; number_of_response:1; }","duration":"169.137988ms","start":"2024-08-05T11:31:22.761969Z","end":"2024-08-05T11:31:22.931107Z","steps":["trace[2066504602] 'process raft request'  (duration: 169.092076ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:22.931371Z","caller":"traceutil/trace.go:171","msg":"trace[261962480] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"352.43285ms","start":"2024-08-05T11:31:22.578926Z","end":"2024-08-05T11:31:22.931359Z","steps":["trace[261962480] 'process raft request'  (duration: 349.399033ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:22.931433Z","caller":"traceutil/trace.go:171","msg":"trace[98690444] linearizableReadLoop","detail":"{readStateIndex:1630; appliedIndex:1629; }","duration":"349.837986ms","start":"2024-08-05T11:31:22.581586Z","end":"2024-08-05T11:31:22.931424Z","steps":["trace[98690444] 'read index received'  (duration: 346.745538ms)","trace[98690444] 'applied index is now lower than readState.Index'  (duration: 3.091824ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T11:31:22.931522Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:31:22.578909Z","time spent":"352.503222ms","remote":"127.0.0.1:44682","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1247,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/persistentvolumes/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92\" mod_revision:1512 > success:<request_put:<key:\"/registry/persistentvolumes/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92\" value_size:1171 >> failure:<request_range:<key:\"/registry/persistentvolumes/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92\" > >"}
	{"level":"warn","ts":"2024-08-05T11:31:22.931649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.05426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-05T11:31:22.931676Z","caller":"traceutil/trace.go:171","msg":"trace[1464305680] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1579; }","duration":"350.104553ms","start":"2024-08-05T11:31:22.581563Z","end":"2024-08-05T11:31:22.931668Z","steps":["trace[1464305680] 'agreement among raft nodes before linearized reading'  (duration: 350.019411ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:31:22.931696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:31:22.581553Z","time spent":"350.138747ms","remote":"127.0.0.1:44704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-08-05T11:31:22.931779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.096076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-08-05T11:31:22.931798Z","caller":"traceutil/trace.go:171","msg":"trace[1240337336] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1579; }","duration":"248.115772ms","start":"2024-08-05T11:31:22.683677Z","end":"2024-08-05T11:31:22.931792Z","steps":["trace[1240337336] 'agreement among raft nodes before linearized reading'  (duration: 248.057438ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:31:38.203176Z","caller":"traceutil/trace.go:171","msg":"trace[1974242065] transaction","detail":"{read_only:false; response_revision:1692; number_of_response:1; }","duration":"100.823026ms","start":"2024-08-05T11:31:38.102317Z","end":"2024-08-05T11:31:38.20314Z","steps":["trace[1974242065] 'process raft request'  (duration: 99.733861ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:32:12.616752Z","caller":"traceutil/trace.go:171","msg":"trace[1644210485] transaction","detail":"{read_only:false; response_revision:1888; number_of_response:1; }","duration":"203.426092ms","start":"2024-08-05T11:32:12.413299Z","end":"2024-08-05T11:32:12.616725Z","steps":["trace[1644210485] 'process raft request'  (duration: 196.097364ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T11:32:12.616966Z","caller":"traceutil/trace.go:171","msg":"trace[742688298] linearizableReadLoop","detail":"{readStateIndex:1957; appliedIndex:1956; }","duration":"174.866109ms","start":"2024-08-05T11:32:12.442081Z","end":"2024-08-05T11:32:12.616947Z","steps":["trace[742688298] 'read index received'  (duration: 167.324395ms)","trace[742688298] 'applied index is now lower than readState.Index'  (duration: 7.541049ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T11:32:12.617171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.048206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-health-monitor-controller-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T11:32:12.617231Z","caller":"traceutil/trace.go:171","msg":"trace[1285658805] range","detail":"{range_begin:/registry/clusterroles/external-health-monitor-controller-runner; range_end:; response_count:0; response_revision:1889; }","duration":"175.143769ms","start":"2024-08-05T11:32:12.44207Z","end":"2024-08-05T11:32:12.617214Z","steps":["trace[1285658805] 'agreement among raft nodes before linearized reading'  (duration: 175.03422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T11:32:12.617426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.995335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/addons-624151\" ","response":"range_response_count:1 size:865"}
	{"level":"info","ts":"2024-08-05T11:32:12.61747Z","caller":"traceutil/trace.go:171","msg":"trace[1814877083] range","detail":"{range_begin:/registry/csinodes/addons-624151; range_end:; response_count:1; response_revision:1889; }","duration":"138.043873ms","start":"2024-08-05T11:32:12.479419Z","end":"2024-08-05T11:32:12.617463Z","steps":["trace[1814877083] 'agreement among raft nodes before linearized reading'  (duration: 137.902448ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:36:16 up 8 min,  0 users,  load average: 0.13, 0.72, 0.52
	Linux addons-624151 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7da48507d2311bebc83cd3bf4ca2cef602ec05ea529873ed1d454439bf7073f] <==
	I0805 11:30:29.896792       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0805 11:30:37.885626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.142:8443->192.168.39.1:36500: use of closed network connection
	E0805 11:30:38.090005       1 conn.go:339] Error on socket receive: read tcp 192.168.39.142:8443->192.168.39.1:36524: use of closed network connection
	I0805 11:30:53.048469       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0805 11:30:54.102952       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0805 11:31:15.981679       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.108.152"}
	I0805 11:31:35.301754       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0805 11:31:35.520188       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.189.86"}
	E0805 11:31:38.992548       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0805 11:31:44.860237       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0805 11:32:15.189657       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.142:8443->10.244.0.32:34022: read: connection reset by peer
	I0805 11:32:18.170261       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.170330       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.188041       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.188205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.210235       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.210382       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.217266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.217391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 11:32:18.310209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 11:32:18.310333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0805 11:32:19.218507       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0805 11:32:19.310970       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0805 11:32:19.317046       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0805 11:33:59.215672       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.144.139"}
	
	
	==> kube-controller-manager [7f11721040ddc212db151c004c40ef0a7e7a27d51f5c8fb06955f93ae7edb02a] <==
	W0805 11:34:17.444512       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:34:17.444695       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:34:18.420115       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:34:18.420215       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:34:18.847495       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:34:18.847549       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:34:20.396768       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:34:20.396842       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:34:51.431986       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:34:51.432060       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:35:08.790150       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:35:08.790214       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:35:09.034121       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:35:09.034292       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:35:10.029964       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:35:10.030070       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:35:48.417068       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:35:48.417322       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:35:49.005340       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:35:49.005520       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:36:03.315672       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:36:03.315723       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 11:36:07.482478       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 11:36:07.482524       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 11:36:14.561108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.673µs"
	
	
	==> kube-proxy [bf8488bfd154a167ef0bfbc64858c06825f1ace9af3108b2fb8282fa505ec428] <==
	I0805 11:28:54.559168       1 server_linux.go:69] "Using iptables proxy"
	I0805 11:28:54.591219       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.142"]
	I0805 11:28:54.683065       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 11:28:54.683113       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 11:28:54.683129       1 server_linux.go:165] "Using iptables Proxier"
	I0805 11:28:54.687948       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 11:28:54.688195       1 server.go:872] "Version info" version="v1.30.3"
	I0805 11:28:54.688208       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:28:54.690631       1 config.go:319] "Starting node config controller"
	I0805 11:28:54.690641       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 11:28:54.691004       1 config.go:192] "Starting service config controller"
	I0805 11:28:54.691013       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 11:28:54.691028       1 config.go:101] "Starting endpoint slice config controller"
	I0805 11:28:54.691031       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 11:28:54.791113       1 shared_informer.go:320] Caches are synced for node config
	I0805 11:28:54.791157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 11:28:54.791180       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1fbda93b1b1b61d3a7df0606e39ec3314e927ecdaca7d4ce96af0bf43dd56928] <==
	W0805 11:28:36.910932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:28:36.910966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:28:36.911063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 11:28:36.911092       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 11:28:36.911105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 11:28:36.911112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 11:28:36.911350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 11:28:36.911406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 11:28:37.729669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 11:28:37.729731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 11:28:37.771257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 11:28:37.771349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 11:28:37.806953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:28:37.807361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:28:37.935825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 11:28:37.936011       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 11:28:38.000396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 11:28:38.000576       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 11:28:38.096254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 11:28:38.096418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 11:28:38.096633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 11:28:38.096736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 11:28:38.156456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 11:28:38.156749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0805 11:28:38.489927       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.959620    1279 scope.go:117] "RemoveContainer" containerID="32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee"
	Aug 05 11:34:04 addons-624151 kubelet[1279]: E0805 11:34:04.960205    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee\": container with ID starting with 32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee not found: ID does not exist" containerID="32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee"
	Aug 05 11:34:04 addons-624151 kubelet[1279]: I0805 11:34:04.960230    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee"} err="failed to get container status \"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee\": rpc error: code = NotFound desc = could not find container \"32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee\": container with ID starting with 32f6f5423b8f0124d451b545d8f6b067ed2e7891e1bd367ad5739e35951d2cee not found: ID does not exist"
	Aug 05 11:34:05 addons-624151 kubelet[1279]: I0805 11:34:05.329387    1279 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 05 11:34:05 addons-624151 kubelet[1279]: I0805 11:34:05.332623    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4386ba62-d693-491b-8189-b027aa4647ee" path="/var/lib/kubelet/pods/4386ba62-d693-491b-8189-b027aa4647ee/volumes"
	Aug 05 11:34:39 addons-624151 kubelet[1279]: E0805 11:34:39.368449    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:34:39 addons-624151 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:34:39 addons-624151 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:34:39 addons-624151 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:34:39 addons-624151 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:34:40 addons-624151 kubelet[1279]: I0805 11:34:40.001670    1279 scope.go:117] "RemoveContainer" containerID="3f86482c227aef4e8356044724b4dece6cafd222483cbf308f0b67cd1893de58"
	Aug 05 11:34:40 addons-624151 kubelet[1279]: I0805 11:34:40.020760    1279 scope.go:117] "RemoveContainer" containerID="ea47c54585f354161665d826e48ab1db4a14e25572006bd88ee33514ea425646"
	Aug 05 11:35:18 addons-624151 kubelet[1279]: I0805 11:35:18.330000    1279 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 05 11:35:39 addons-624151 kubelet[1279]: E0805 11:35:39.369109    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:35:39 addons-624151 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:35:39 addons-624151 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:35:39 addons-624151 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:35:39 addons-624151 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:36:14 addons-624151 kubelet[1279]: I0805 11:36:14.587042    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-766vd" podStartSLOduration=133.071775524 podStartE2EDuration="2m15.586973503s" podCreationTimestamp="2024-08-05 11:33:59 +0000 UTC" firstStartedPulling="2024-08-05 11:33:59.641496971 +0000 UTC m=+320.465538470" lastFinishedPulling="2024-08-05 11:34:02.156694948 +0000 UTC m=+322.980736449" observedRunningTime="2024-08-05 11:34:02.944205444 +0000 UTC m=+323.768246965" watchObservedRunningTime="2024-08-05 11:36:14.586973503 +0000 UTC m=+455.411015004"
	Aug 05 11:36:15 addons-624151 kubelet[1279]: I0805 11:36:15.950589    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfjg\" (UniqueName: \"kubernetes.io/projected/7b3be79e-f92b-4158-8829-8fc50c6ebbd1-kube-api-access-5qfjg\") pod \"7b3be79e-f92b-4158-8829-8fc50c6ebbd1\" (UID: \"7b3be79e-f92b-4158-8829-8fc50c6ebbd1\") "
	Aug 05 11:36:15 addons-624151 kubelet[1279]: I0805 11:36:15.950660    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b3be79e-f92b-4158-8829-8fc50c6ebbd1-tmp-dir\") pod \"7b3be79e-f92b-4158-8829-8fc50c6ebbd1\" (UID: \"7b3be79e-f92b-4158-8829-8fc50c6ebbd1\") "
	Aug 05 11:36:15 addons-624151 kubelet[1279]: I0805 11:36:15.951085    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3be79e-f92b-4158-8829-8fc50c6ebbd1-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "7b3be79e-f92b-4158-8829-8fc50c6ebbd1" (UID: "7b3be79e-f92b-4158-8829-8fc50c6ebbd1"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 05 11:36:15 addons-624151 kubelet[1279]: I0805 11:36:15.961289    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3be79e-f92b-4158-8829-8fc50c6ebbd1-kube-api-access-5qfjg" (OuterVolumeSpecName: "kube-api-access-5qfjg") pod "7b3be79e-f92b-4158-8829-8fc50c6ebbd1" (UID: "7b3be79e-f92b-4158-8829-8fc50c6ebbd1"). InnerVolumeSpecName "kube-api-access-5qfjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 11:36:16 addons-624151 kubelet[1279]: I0805 11:36:16.051423    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5qfjg\" (UniqueName: \"kubernetes.io/projected/7b3be79e-f92b-4158-8829-8fc50c6ebbd1-kube-api-access-5qfjg\") on node \"addons-624151\" DevicePath \"\""
	Aug 05 11:36:16 addons-624151 kubelet[1279]: I0805 11:36:16.051468    1279 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7b3be79e-f92b-4158-8829-8fc50c6ebbd1-tmp-dir\") on node \"addons-624151\" DevicePath \"\""
	
	
	==> storage-provisioner [c11137f2baffb9dff05dff3e4c5264eb5ba8e5ede5a9153db347bfb605a09a4c] <==
	I0805 11:28:59.388610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 11:28:59.466562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 11:28:59.466635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 11:28:59.513055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 11:28:59.513221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-624151_675c37c5-20bf-454e-96ac-66a43e0d8ee8!
	I0805 11:28:59.514192       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6bd3dc1-59d0-408a-80c8-4e964d76e9ca", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-624151_675c37c5-20bf-454e-96ac-66a43e0d8ee8 became leader
	I0805 11:28:59.614995       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-624151_675c37c5-20bf-454e-96ac-66a43e0d8ee8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-624151 -n addons-624151
helpers_test.go:261: (dbg) Run:  kubectl --context addons-624151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (330.33s)
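The component dumps above also contain a likely contributing factor: the etcd section is full of "apply request took too long" and slow linearizable-read warnings, which typically points at slow storage or CPU contention on the test host rather than at the metrics-server addon itself. As a minimal manual follow-up (a sketch outside the harness, assuming the stock metrics-server label and APIService name used by the addon):

	# is the addon pod up, and is its aggregated API registered?
	kubectl --context addons-624151 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-624151 get apiservice v1beta1.metrics.k8s.io
	# kubectl top only succeeds once the APIService above reports Available=True
	kubectl --context addons-624151 top nodes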

                                                
                                    
TestAddons/StoppedEnableDisable (154.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-624151
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-624151: exit status 82 (2m0.472477186s)

                                                
                                                
-- stdout --
	* Stopping node "addons-624151"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-624151" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-624151
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-624151: exit status 11 (21.536545686s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-624151" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-624151
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-624151: exit status 11 (6.143801372s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-624151" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-624151
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-624151: exit status 11 (6.143203333s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-624151" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.30s)
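All three addon commands after the failed stop die the same way: the stop timed out with the VM still reported as "Running", and by the time the enable/disable commands run the guest is unreachable over SSH (dial tcp 192.168.39.142:22: no route to host), so every paused-state check exits with status 11. A rough manual triage sketch for GUEST_STOP_TIMEOUT on the kvm2 driver (assuming libvirt's qemu:///system URI and that the domain carries the profile name, which is the driver's usual naming):

	# is libvirt still reporting the guest as running?
	virsh -c qemu:///system list --all
	# retry the stop with verbose logging to see where it hangs
	out/minikube-linux-amd64 stop -p addons-624151 --alsologtostderr -v=7
	# last resort: hard power-off the guest, then remove the profile
	virsh -c qemu:///system destroy addons-624151
	out/minikube-linux-amd64 delete -p addons-624151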

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 image ls --format short --alsologtostderr: (2.265889048s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014296 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014296 image ls --format short --alsologtostderr:
I0805 11:43:18.920735  401297 out.go:291] Setting OutFile to fd 1 ...
I0805 11:43:18.920864  401297 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:18.920874  401297 out.go:304] Setting ErrFile to fd 2...
I0805 11:43:18.920878  401297 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:18.921049  401297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
I0805 11:43:18.921622  401297 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:18.921718  401297 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:18.922079  401297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:18.922140  401297 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:18.937155  401297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
I0805 11:43:18.937668  401297 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:18.938230  401297 main.go:141] libmachine: Using API Version  1
I0805 11:43:18.938253  401297 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:18.938642  401297 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:18.938875  401297 main.go:141] libmachine: (functional-014296) Calling .GetState
I0805 11:43:18.940615  401297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:18.940654  401297 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:18.954997  401297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41311
I0805 11:43:18.955392  401297 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:18.956054  401297 main.go:141] libmachine: Using API Version  1
I0805 11:43:18.956086  401297 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:18.956410  401297 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:18.956670  401297 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:43:18.956898  401297 ssh_runner.go:195] Run: systemctl --version
I0805 11:43:18.956932  401297 main.go:141] libmachine: (functional-014296) Calling .GetSSHHostname
I0805 11:43:18.960041  401297 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:18.960612  401297 main.go:141] libmachine: (functional-014296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:fd:03", ip: ""} in network mk-functional-014296: {Iface:virbr1 ExpiryTime:2024-08-05 12:39:56 +0000 UTC Type:0 Mac:52:54:00:29:fd:03 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:functional-014296 Clientid:01:52:54:00:29:fd:03}
I0805 11:43:18.960643  401297 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined IP address 192.168.39.155 and MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:18.960801  401297 main.go:141] libmachine: (functional-014296) Calling .GetSSHPort
I0805 11:43:18.960975  401297 main.go:141] libmachine: (functional-014296) Calling .GetSSHKeyPath
I0805 11:43:18.961207  401297 main.go:141] libmachine: (functional-014296) Calling .GetSSHUsername
I0805 11:43:18.961393  401297 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/functional-014296/id_rsa Username:docker}
I0805 11:43:19.066773  401297 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 11:43:21.137006  401297 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.070189384s)
W0805 11:43:21.137084  401297 cache_images.go:721] Failed to list images for profile functional-014296 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E0805 11:43:21.109370    8827 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2024-08-05T11:43:21Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0805 11:43:21.137147  401297 main.go:141] libmachine: Making call to close driver server
I0805 11:43:21.137160  401297 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:21.137551  401297 main.go:141] libmachine: (functional-014296) DBG | Closing plugin on server side
I0805 11:43:21.137575  401297 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:21.137587  401297 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 11:43:21.137600  401297 main.go:141] libmachine: Making call to close driver server
I0805 11:43:21.137611  401297 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:21.137860  401297 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:21.137879  401297 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 11:43:21.137898  401297 main.go:141] libmachine: (functional-014296) DBG | Closing plugin on server side
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)
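The empty image list is a symptom of the CRI call, not of missing images: "sudo crictl images --output json" ran for roughly two seconds and was cut off by crictl's default 2s RPC timeout (the DeadlineExceeded in the stderr above), so minikube had nothing to print. A sketch of reproducing this by hand with a longer timeout (crictl's global --timeout flag; "crio" is the runtime's systemd unit name on the minikube ISO):

	out/minikube-linux-amd64 -p functional-014296 ssh "sudo crictl --timeout 30s images --output json"
	# if the call still stalls, check what the runtime was doing at the time
	out/minikube-linux-amd64 -p functional-014296 ssh "sudo journalctl -u crio --no-pager | tail -n 50"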

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (242.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdany-port358739415/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722858174657329828" to /tmp/TestFunctionalparallelMountCmdany-port358739415/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722858174657329828" to /tmp/TestFunctionalparallelMountCmdany-port358739415/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722858174657329828" to /tmp/TestFunctionalparallelMountCmdany-port358739415/001/test-1722858174657329828
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.81953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  5 11:42 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  5 11:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  5 11:42 test-1722858174657329828
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh cat /mount-9p/test-1722858174657329828
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-014296 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cc1fa262-b87c-4047-a9b6-79324e376ae7] Pending
helpers_test.go:344: "busybox-mount" [cc1fa262-b87c-4047-a9b6-79324e376ae7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:329: TestFunctional/parallel/MountCmd/any-port: WARNING: pod list for "default" "integration-test=busybox-mount" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_mount_test.go:153: ***** TestFunctional/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: context deadline exceeded ****
functional_test_mount_test.go:153: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-014296 -n functional-014296
functional_test_mount_test.go:153: TestFunctional/parallel/MountCmd/any-port: showing logs for failed pods as of 2024-08-05 11:46:56.639935465 +0000 UTC m=+1203.597259153
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-014296 describe po busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-014296 describe po busybox-mount -n default:
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-014296/192.168.39.155
Start Time:       Mon, 05 Aug 2024 11:42:56 +0000
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
mount-munger:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State:          Waiting
Reason:       ContainerCreating
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7lpmh (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   False 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
test-volume:
Type:          HostPath (bare host directory volume)
Path:          /mount-9p
HostPathType:  
kube-api-access-7lpmh:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  4m    default-scheduler  Successfully assigned default/busybox-mount to functional-014296
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-014296 logs busybox-mount -n default
functional_test_mount_test.go:153: (dbg) Non-zero exit: kubectl --context functional-014296 logs busybox-mount -n default: exit status 1 (68.793044ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mount-munger" in pod "busybox-mount" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test_mount_test.go:153: kubectl --context functional-014296 logs busybox-mount -n default: exit status 1
functional_test_mount_test.go:154: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: context deadline exceeded
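The describe output above shows the pod stuck in ContainerCreating for the full 4m0s with nothing but a Scheduled event, which usually means sandbox creation or the gcr.io/k8s-minikube/busybox image pull never completed on the node. Beyond the debug info the harness collects next, a quick manual look (a sketch, reusing the same ssh pattern the test itself uses) could ask CRI-O directly:

	# did a pod sandbox ever get created, and is the busybox image present on the node?
	out/minikube-linux-amd64 -p functional-014296 ssh "sudo crictl pods --name busybox-mount"
	out/minikube-linux-amd64 -p functional-014296 ssh "sudo crictl images | grep busybox"
	# cluster-side view of the same pod's lifecycle
	kubectl --context functional-014296 get events -n default --sort-by=.lastTimestamp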
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (193.86894ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=1000,access=any,msize=65536,trans=tcp,noextend,port=41869)
	total 2
	-rw-r--r-- 1 docker docker 24 Aug  5 11:42 created-by-test
	-rw-r--r-- 1 docker docker 24 Aug  5 11:42 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Aug  5 11:42 test-1722858174657329828
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-014296 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdany-port358739415/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdany-port358739415/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port358739415/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:41869
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port358739415/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdany-port358739415/001:/mount-9p --alsologtostderr -v=1] stderr:
I0805 11:42:54.712560  399885 out.go:291] Setting OutFile to fd 1 ...
I0805 11:42:54.712816  399885 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:42:54.712840  399885 out.go:304] Setting ErrFile to fd 2...
I0805 11:42:54.712858  399885 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:42:54.713145  399885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
I0805 11:42:54.713555  399885 mustload.go:65] Loading cluster: functional-014296
I0805 11:42:54.714110  399885 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:42:54.714670  399885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:42:54.714761  399885 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:42:54.734598  399885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
I0805 11:42:54.735073  399885 main.go:141] libmachine: () Calling .GetVersion
I0805 11:42:54.735756  399885 main.go:141] libmachine: Using API Version  1
I0805 11:42:54.735784  399885 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:42:54.736393  399885 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:42:54.736604  399885 main.go:141] libmachine: (functional-014296) Calling .GetState
I0805 11:42:54.738395  399885 host.go:66] Checking if "functional-014296" exists ...
I0805 11:42:54.738686  399885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:42:54.738734  399885 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:42:54.756486  399885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
I0805 11:42:54.756846  399885 main.go:141] libmachine: () Calling .GetVersion
I0805 11:42:54.757359  399885 main.go:141] libmachine: Using API Version  1
I0805 11:42:54.757385  399885 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:42:54.757776  399885 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:42:54.757973  399885 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:42:54.758146  399885 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:42:54.758302  399885 main.go:141] libmachine: (functional-014296) Calling .GetIP
I0805 11:42:54.761963  399885 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:42:54.762360  399885 main.go:141] libmachine: (functional-014296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:fd:03", ip: ""} in network mk-functional-014296: {Iface:virbr1 ExpiryTime:2024-08-05 12:39:56 +0000 UTC Type:0 Mac:52:54:00:29:fd:03 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:functional-014296 Clientid:01:52:54:00:29:fd:03}
I0805 11:42:54.762396  399885 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined IP address 192.168.39.155 and MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:42:54.763242  399885 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:42:54.765467  399885 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port358739415/001 into VM as /mount-9p ...
I0805 11:42:54.766846  399885 out.go:177]   - Mount type:   9p
I0805 11:42:54.768032  399885 out.go:177]   - User ID:      docker
I0805 11:42:54.769500  399885 out.go:177]   - Group ID:     docker
I0805 11:42:54.771213  399885 out.go:177]   - Version:      9p2000.L
I0805 11:42:54.772709  399885 out.go:177]   - Message Size: 262144
I0805 11:42:54.773749  399885 out.go:177]   - Options:      map[]
I0805 11:42:54.775058  399885 out.go:177]   - Bind Address: 192.168.39.1:41869
I0805 11:42:54.777055  399885 out.go:177] * Userspace file server: 
I0805 11:42:54.777126  399885 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0805 11:42:54.777165  399885 main.go:141] libmachine: (functional-014296) Calling .GetSSHHostname
I0805 11:42:54.779939  399885 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:42:54.780509  399885 main.go:141] libmachine: (functional-014296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:fd:03", ip: ""} in network mk-functional-014296: {Iface:virbr1 ExpiryTime:2024-08-05 12:39:56 +0000 UTC Type:0 Mac:52:54:00:29:fd:03 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:functional-014296 Clientid:01:52:54:00:29:fd:03}
I0805 11:42:54.780543  399885 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined IP address 192.168.39.155 and MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:42:54.780715  399885 main.go:141] libmachine: (functional-014296) Calling .GetSSHPort
I0805 11:42:54.780871  399885 main.go:141] libmachine: (functional-014296) Calling .GetSSHKeyPath
I0805 11:42:54.781001  399885 main.go:141] libmachine: (functional-014296) Calling .GetSSHUsername
I0805 11:42:54.781114  399885 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/functional-014296/id_rsa Username:docker}
I0805 11:42:54.930093  399885 mount.go:180] unmount for /mount-9p ran successfully
I0805 11:42:54.930126  399885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0805 11:42:54.961177  399885 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=41869,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I0805 11:42:55.022201  399885 main.go:125] stdlog: ufs.go:141 connected
I0805 11:42:55.022397  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tversion tag 65535 msize 65536 version '9P2000.L'
I0805 11:42:55.022463  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rversion tag 65535 msize 65536 version '9P2000'
I0805 11:42:55.022876  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0805 11:42:55.022990  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rattach tag 0 aqid (20fa2de 225978bf 'd')
I0805 11:42:55.023320  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 0
I0805 11:42:55.023495  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2de 225978bf 'd') m d775 at 0 mt 1722858174 l 4096 t 0 d 0 ext )
I0805 11:42:55.029172  399885 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/.mount-process: {Name:mk4397536a4aecf80a404872aa83353c1b827c33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 11:42:55.029404  399885 mount.go:105] mount successful: ""
I0805 11:42:55.031486  399885 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port358739415/001 to /mount-9p
I0805 11:42:55.032919  399885 out.go:177] 
I0805 11:42:55.034387  399885 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0805 11:42:55.952246  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 0
I0805 11:42:55.952407  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2de 225978bf 'd') m d775 at 0 mt 1722858174 l 4096 t 0 d 0 ext )
I0805 11:42:55.954857  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 1 
I0805 11:42:55.954915  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 
I0805 11:42:55.962759  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Topen tag 0 fid 1 mode 0
I0805 11:42:55.962833  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Ropen tag 0 qid (20fa2de 225978bf 'd') iounit 0
I0805 11:42:55.963040  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 0
I0805 11:42:55.963162  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2de 225978bf 'd') m d775 at 0 mt 1722858174 l 4096 t 0 d 0 ext )
I0805 11:42:55.963402  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 0 count 65512
I0805 11:42:55.963565  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 258
I0805 11:42:55.963776  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 258 count 65254
I0805 11:42:55.963813  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:42:55.964038  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 258 count 65512
I0805 11:42:55.964064  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:42:55.964550  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 2 0:'test-1722858174657329828' 
I0805 11:42:55.964594  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 (20fa2e6 225978bf '') 
I0805 11:42:55.964775  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:42:55.964844  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('test-1722858174657329828' 'jenkins' 'balintp' '' q (20fa2e6 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:55.964999  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:42:55.965082  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('test-1722858174657329828' 'jenkins' 'balintp' '' q (20fa2e6 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:55.965201  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 2
I0805 11:42:55.965223  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:42:55.965415  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0805 11:42:55.965456  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 (20fa2e5 225978bf '') 
I0805 11:42:55.965630  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:42:55.965686  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2e5 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:55.965792  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:42:55.965879  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2e5 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:55.965976  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 2
I0805 11:42:55.965993  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:42:55.966191  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0805 11:42:55.966231  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 (20fa2e2 225978bf '') 
I0805 11:42:55.966344  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:42:55.966396  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2e2 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:55.966509  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:42:55.966591  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2e2 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:55.966693  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 2
I0805 11:42:55.966712  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:42:55.966806  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 258 count 65512
I0805 11:42:55.966839  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:42:55.966929  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 1
I0805 11:42:55.966950  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:42:56.218225  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 1 0:'test-1722858174657329828' 
I0805 11:42:56.218288  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 (20fa2e6 225978bf '') 
I0805 11:42:56.219992  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 1
I0805 11:42:56.220121  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('test-1722858174657329828' 'jenkins' 'balintp' '' q (20fa2e6 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:56.220402  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 1 newfid 2 
I0805 11:42:56.220449  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 
I0805 11:42:56.220755  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Topen tag 0 fid 2 mode 0
I0805 11:42:56.220831  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Ropen tag 0 qid (20fa2e6 225978bf '') iounit 0
I0805 11:42:56.221016  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 1
I0805 11:42:56.221119  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('test-1722858174657329828' 'jenkins' 'balintp' '' q (20fa2e6 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:42:56.221441  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 2 offset 0 count 65512
I0805 11:42:56.221516  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 24
I0805 11:42:56.221822  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 2 offset 24 count 65512
I0805 11:42:56.221854  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:42:56.222077  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 2 offset 24 count 65512
I0805 11:42:56.222131  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:42:56.222381  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 2
I0805 11:42:56.222418  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:42:56.222673  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 1
I0805 11:42:56.222719  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:46:56.961110  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 0
I0805 11:46:56.961312  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2de 225978bf 'd') m d775 at 0 mt 1722858174 l 4096 t 0 d 0 ext )
I0805 11:46:56.962745  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 1 
I0805 11:46:56.962814  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 
I0805 11:46:56.963077  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Topen tag 0 fid 1 mode 0
I0805 11:46:56.963158  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Ropen tag 0 qid (20fa2de 225978bf 'd') iounit 0
I0805 11:46:56.963322  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 0
I0805 11:46:56.963439  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2de 225978bf 'd') m d775 at 0 mt 1722858174 l 4096 t 0 d 0 ext )
I0805 11:46:56.963647  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 0 count 65512
I0805 11:46:56.963860  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 258
I0805 11:46:56.964008  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 258 count 65254
I0805 11:46:56.964043  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:46:56.964207  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 258 count 65512
I0805 11:46:56.964271  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:46:56.964408  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 2 0:'test-1722858174657329828' 
I0805 11:46:56.964442  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 (20fa2e6 225978bf '') 
I0805 11:46:56.964553  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:46:56.964652  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('test-1722858174657329828' 'jenkins' 'balintp' '' q (20fa2e6 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:46:56.964881  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:46:56.964987  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('test-1722858174657329828' 'jenkins' 'balintp' '' q (20fa2e6 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:46:56.965143  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 2
I0805 11:46:56.965177  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:46:56.965309  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0805 11:46:56.965348  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 (20fa2e5 225978bf '') 
I0805 11:46:56.965457  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:46:56.965552  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2e5 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:46:56.965769  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:46:56.965849  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2e5 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:46:56.966167  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 2
I0805 11:46:56.966200  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:46:56.966480  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0805 11:46:56.966510  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rwalk tag 0 (20fa2e2 225978bf '') 
I0805 11:46:56.966626  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:46:56.966721  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2e2 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:46:56.966857  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tstat tag 0 fid 2
I0805 11:46:56.966929  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2e2 225978bf '') m 644 at 0 mt 1722858174 l 24 t 0 d 0 ext )
I0805 11:46:56.967037  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 2
I0805 11:46:56.967068  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:46:56.967245  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tread tag 0 fid 1 offset 258 count 65512
I0805 11:46:56.967370  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rread tag 0 count 0
I0805 11:46:56.967526  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 1
I0805 11:46:56.967565  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:46:56.969895  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0805 11:46:56.969947  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rerror tag 0 ename 'file not found' ecode 0
I0805 11:46:57.161330  399885 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.155:45780 Tclunk tag 0 fid 0
I0805 11:46:57.161385  399885 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.155:45780 Rclunk tag 0
I0805 11:46:57.167027  399885 main.go:125] stdlog: ufs.go:147 disconnected
I0805 11:46:57.384797  399885 out.go:177] * Unmounting /mount-9p ...
I0805 11:46:57.386061  399885 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0805 11:46:57.394787  399885 mount.go:180] unmount for /mount-9p ran successfully
I0805 11:46:57.394911  399885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/.mount-process: {Name:mk4397536a4aecf80a404872aa83353c1b827c33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 11:46:57.396492  399885 out.go:177] 
W0805 11:46:57.397825  399885 out.go:239] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0805 11:46:57.399321  399885 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (242.83s)
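For reference, the mount this test exercises is the 9p mount visible in the log above (userspace file server bound to 192.168.39.1:41869, msize 262144, version 9p2000.L). A minimal shell sketch of the sequence seen in the ssh_runner lines above follows; the port and msize values are taken from this particular run and will differ between runs:

	# 1) Unmount any stale 9p mount at the target path (same check as the log above).
	[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo ""
	# 2) Recreate the mount point.
	sudo mkdir -p /mount-9p
	# 3) Mount the host-side userspace 9p server exposed on 192.168.39.1.
	sudo mount -t 9p \
	  -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=41869,trans=tcp,version=9p2000.L \
	  192.168.39.1 /mount-9p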

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 node stop m02 -v=7 --alsologtostderr
E0805 11:52:52.926869  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:53:20.611893  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.475986618s)

                                                
                                                
-- stdout --
	* Stopping node "ha-672593-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:52:23.017462  407082 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:52:23.017601  407082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:52:23.017613  407082 out.go:304] Setting ErrFile to fd 2...
	I0805 11:52:23.017619  407082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:52:23.017823  407082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:52:23.018064  407082 mustload.go:65] Loading cluster: ha-672593
	I0805 11:52:23.018470  407082 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:52:23.018496  407082 stop.go:39] StopHost: ha-672593-m02
	I0805 11:52:23.018841  407082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:52:23.018895  407082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:52:23.034251  407082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0805 11:52:23.034695  407082 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:52:23.035232  407082 main.go:141] libmachine: Using API Version  1
	I0805 11:52:23.035260  407082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:52:23.035582  407082 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:52:23.037757  407082 out.go:177] * Stopping node "ha-672593-m02"  ...
	I0805 11:52:23.038890  407082 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 11:52:23.038938  407082 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:52:23.039148  407082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 11:52:23.039182  407082 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:52:23.041963  407082 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:52:23.042393  407082 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:52:23.042420  407082 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:52:23.042549  407082 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:52:23.042777  407082 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:52:23.042976  407082 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:52:23.043125  407082 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:52:23.127531  407082 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 11:52:23.182519  407082 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 11:52:23.238436  407082 main.go:141] libmachine: Stopping "ha-672593-m02"...
	I0805 11:52:23.238481  407082 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:52:23.240258  407082 main.go:141] libmachine: (ha-672593-m02) Calling .Stop
	I0805 11:52:23.243699  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 0/120
	I0805 11:52:24.244981  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 1/120
	I0805 11:52:25.246066  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 2/120
	I0805 11:52:26.247522  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 3/120
	I0805 11:52:27.248906  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 4/120
	I0805 11:52:28.250692  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 5/120
	I0805 11:52:29.252214  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 6/120
	I0805 11:52:30.254238  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 7/120
	I0805 11:52:31.255811  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 8/120
	I0805 11:52:32.257226  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 9/120
	I0805 11:52:33.259332  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 10/120
	I0805 11:52:34.260655  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 11/120
	I0805 11:52:35.262369  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 12/120
	I0805 11:52:36.263689  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 13/120
	I0805 11:52:37.265703  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 14/120
	I0805 11:52:38.267650  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 15/120
	I0805 11:52:39.268989  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 16/120
	I0805 11:52:40.270180  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 17/120
	I0805 11:52:41.271536  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 18/120
	I0805 11:52:42.273126  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 19/120
	I0805 11:52:43.275031  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 20/120
	I0805 11:52:44.276479  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 21/120
	I0805 11:52:45.278216  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 22/120
	I0805 11:52:46.279834  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 23/120
	I0805 11:52:47.281248  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 24/120
	I0805 11:52:48.283055  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 25/120
	I0805 11:52:49.284433  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 26/120
	I0805 11:52:50.286339  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 27/120
	I0805 11:52:51.287662  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 28/120
	I0805 11:52:52.288974  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 29/120
	I0805 11:52:53.291277  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 30/120
	I0805 11:52:54.293648  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 31/120
	I0805 11:52:55.295135  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 32/120
	I0805 11:52:56.297424  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 33/120
	I0805 11:52:57.298959  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 34/120
	I0805 11:52:58.301096  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 35/120
	I0805 11:52:59.302818  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 36/120
	I0805 11:53:00.305347  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 37/120
	I0805 11:53:01.306884  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 38/120
	I0805 11:53:02.309209  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 39/120
	I0805 11:53:03.311397  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 40/120
	I0805 11:53:04.312900  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 41/120
	I0805 11:53:05.315219  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 42/120
	I0805 11:53:06.317047  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 43/120
	I0805 11:53:07.318647  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 44/120
	I0805 11:53:08.320592  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 45/120
	I0805 11:53:09.322402  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 46/120
	I0805 11:53:10.324603  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 47/120
	I0805 11:53:11.326643  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 48/120
	I0805 11:53:12.328259  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 49/120
	I0805 11:53:13.330433  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 50/120
	I0805 11:53:14.331756  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 51/120
	I0805 11:53:15.333163  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 52/120
	I0805 11:53:16.335205  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 53/120
	I0805 11:53:17.336796  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 54/120
	I0805 11:53:18.338562  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 55/120
	I0805 11:53:19.339830  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 56/120
	I0805 11:53:20.341280  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 57/120
	I0805 11:53:21.342738  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 58/120
	I0805 11:53:22.344318  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 59/120
	I0805 11:53:23.345626  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 60/120
	I0805 11:53:24.347112  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 61/120
	I0805 11:53:25.348801  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 62/120
	I0805 11:53:26.350641  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 63/120
	I0805 11:53:27.352363  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 64/120
	I0805 11:53:28.354600  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 65/120
	I0805 11:53:29.356296  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 66/120
	I0805 11:53:30.357727  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 67/120
	I0805 11:53:31.359384  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 68/120
	I0805 11:53:32.360769  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 69/120
	I0805 11:53:33.362967  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 70/120
	I0805 11:53:34.364326  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 71/120
	I0805 11:53:35.365661  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 72/120
	I0805 11:53:36.367152  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 73/120
	I0805 11:53:37.368549  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 74/120
	I0805 11:53:38.370504  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 75/120
	I0805 11:53:39.371770  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 76/120
	I0805 11:53:40.373286  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 77/120
	I0805 11:53:41.374712  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 78/120
	I0805 11:53:42.376011  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 79/120
	I0805 11:53:43.377970  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 80/120
	I0805 11:53:44.379408  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 81/120
	I0805 11:53:45.381408  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 82/120
	I0805 11:53:46.383140  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 83/120
	I0805 11:53:47.384403  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 84/120
	I0805 11:53:48.385823  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 85/120
	I0805 11:53:49.387270  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 86/120
	I0805 11:53:50.388510  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 87/120
	I0805 11:53:51.390127  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 88/120
	I0805 11:53:52.392236  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 89/120
	I0805 11:53:53.394449  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 90/120
	I0805 11:53:54.396424  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 91/120
	I0805 11:53:55.398456  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 92/120
	I0805 11:53:56.399991  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 93/120
	I0805 11:53:57.402350  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 94/120
	I0805 11:53:58.404745  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 95/120
	I0805 11:53:59.406471  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 96/120
	I0805 11:54:00.407954  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 97/120
	I0805 11:54:01.410228  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 98/120
	I0805 11:54:02.412031  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 99/120
	I0805 11:54:03.414221  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 100/120
	I0805 11:54:04.415663  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 101/120
	I0805 11:54:05.417072  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 102/120
	I0805 11:54:06.418801  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 103/120
	I0805 11:54:07.420145  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 104/120
	I0805 11:54:08.421931  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 105/120
	I0805 11:54:09.423175  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 106/120
	I0805 11:54:10.424662  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 107/120
	I0805 11:54:11.426770  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 108/120
	I0805 11:54:12.428425  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 109/120
	I0805 11:54:13.430951  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 110/120
	I0805 11:54:14.432188  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 111/120
	I0805 11:54:15.433731  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 112/120
	I0805 11:54:16.435318  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 113/120
	I0805 11:54:17.436839  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 114/120
	I0805 11:54:18.438861  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 115/120
	I0805 11:54:19.440287  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 116/120
	I0805 11:54:20.442488  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 117/120
	I0805 11:54:21.444092  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 118/120
	I0805 11:54:22.445457  407082 main.go:141] libmachine: (ha-672593-m02) Waiting for machine to stop 119/120
	I0805 11:54:23.446699  407082 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 11:54:23.446864  407082 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-672593 node stop m02 -v=7 --alsologtostderr": exit status 30
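The stop loop in the driver log polls for two minutes (0/120 through 119/120) and the domain never leaves the "Running" state, which is why node stop exits with status 30. A short sketch for inspecting the domain directly on the libvirt host, assuming virsh access and using the domain name ha-672593-m02 reported in the driver debug output:

	# Query the current libvirt state of the node's domain; a clean stop should end in "shut off".
	virsh domstate ha-672593-m02
	# List every domain on the host to cross-check which minikube nodes are still running.
	virsh list --all
	# If the guest keeps ignoring the shutdown request, a hard power-off is possible with:
	virsh destroy ha-672593-m02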
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (19.0677264s)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:54:23.494850  407507 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:54:23.495149  407507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:23.495160  407507 out.go:304] Setting ErrFile to fd 2...
	I0805 11:54:23.495166  407507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:23.495342  407507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:54:23.495535  407507 out.go:298] Setting JSON to false
	I0805 11:54:23.495569  407507 mustload.go:65] Loading cluster: ha-672593
	I0805 11:54:23.495606  407507 notify.go:220] Checking for updates...
	I0805 11:54:23.496045  407507 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:54:23.496066  407507 status.go:255] checking status of ha-672593 ...
	I0805 11:54:23.496433  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:23.496501  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:23.514585  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0805 11:54:23.515090  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:23.515693  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:23.515713  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:23.516167  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:23.516335  407507 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:54:23.517930  407507 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:54:23.517951  407507 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:23.518307  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:23.518353  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:23.533428  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I0805 11:54:23.533814  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:23.534278  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:23.534301  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:23.534649  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:23.534814  407507 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:54:23.537509  407507 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:23.538011  407507 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:23.538036  407507 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:23.538188  407507 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:23.538484  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:23.538518  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:23.554135  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0805 11:54:23.554651  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:23.555276  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:23.555301  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:23.555603  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:23.555841  407507 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:54:23.556055  407507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:23.556080  407507 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:54:23.558959  407507 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:23.559396  407507 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:23.559424  407507 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:23.559568  407507 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:54:23.559754  407507 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:54:23.559918  407507 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:54:23.560067  407507 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:54:23.649234  407507 ssh_runner.go:195] Run: systemctl --version
	I0805 11:54:23.656223  407507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:23.677438  407507 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:54:23.677474  407507 api_server.go:166] Checking apiserver status ...
	I0805 11:54:23.677530  407507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:54:23.693671  407507 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:54:23.709060  407507 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:54:23.709122  407507 ssh_runner.go:195] Run: ls
	I0805 11:54:23.713924  407507 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:54:23.720430  407507 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:54:23.720453  407507 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:54:23.720462  407507 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:54:23.720481  407507 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:54:23.720842  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:23.720883  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:23.736155  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36201
	I0805 11:54:23.736564  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:23.737078  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:23.737112  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:23.737429  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:23.737621  407507 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:54:23.739404  407507 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 11:54:23.739423  407507 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:54:23.739722  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:23.739777  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:23.753803  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34029
	I0805 11:54:23.754260  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:23.754766  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:23.754787  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:23.755058  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:23.755268  407507 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:54:23.757863  407507 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:23.758308  407507 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:54:23.758339  407507 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:23.758516  407507 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:54:23.758795  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:23.758828  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:23.773987  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I0805 11:54:23.774464  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:23.774993  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:23.775027  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:23.775371  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:23.775574  407507 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:54:23.775833  407507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:23.775856  407507 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:54:23.778478  407507 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:23.778902  407507 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:54:23.778930  407507 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:23.779056  407507 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:54:23.779305  407507 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:54:23.779457  407507 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:54:23.779628  407507 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	W0805 11:54:42.147990  407507 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:54:42.148109  407507 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0805 11:54:42.148145  407507 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:54:42.148160  407507 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 11:54:42.148182  407507 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:54:42.148192  407507 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:54:42.148618  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:42.148696  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:42.164853  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0805 11:54:42.165266  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:42.165855  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:42.165889  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:42.166270  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:42.166520  407507 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:54:42.168515  407507 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:54:42.168538  407507 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:54:42.168849  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:42.168884  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:42.183624  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
	I0805 11:54:42.184096  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:42.184603  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:42.184625  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:42.184956  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:42.185192  407507 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:54:42.187793  407507 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:42.188236  407507 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:54:42.188263  407507 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:42.188449  407507 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:54:42.188748  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:42.188791  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:42.203326  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
	I0805 11:54:42.203690  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:42.204252  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:42.204276  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:42.204609  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:42.204819  407507 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:54:42.205007  407507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:42.205028  407507 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:54:42.208217  407507 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:42.208686  407507 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:54:42.208718  407507 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:42.208928  407507 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:54:42.209128  407507 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:54:42.209262  407507 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:54:42.209429  407507 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:54:42.293053  407507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:42.312805  407507 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:54:42.312860  407507 api_server.go:166] Checking apiserver status ...
	I0805 11:54:42.313025  407507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:54:42.331409  407507 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:54:42.342262  407507 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:54:42.342316  407507 ssh_runner.go:195] Run: ls
	I0805 11:54:42.346844  407507 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:54:42.351133  407507 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:54:42.351157  407507 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:54:42.351169  407507 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:54:42.351192  407507 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:54:42.351575  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:42.351619  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:42.366849  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
	I0805 11:54:42.367379  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:42.367889  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:42.367912  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:42.368224  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:42.368417  407507 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:54:42.369913  407507 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:54:42.369932  407507 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:54:42.370222  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:42.370271  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:42.384821  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0805 11:54:42.385277  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:42.385757  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:42.385785  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:42.386105  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:42.386294  407507 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:54:42.389047  407507 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:42.389505  407507 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:54:42.389546  407507 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:42.389703  407507 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:54:42.389995  407507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:42.390035  407507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:42.404345  407507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I0805 11:54:42.404690  407507 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:42.405147  407507 main.go:141] libmachine: Using API Version  1
	I0805 11:54:42.405168  407507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:42.405467  407507 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:42.405724  407507 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:54:42.405933  407507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:42.405954  407507 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:54:42.408699  407507 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:42.409107  407507 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:54:42.409132  407507 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:42.409261  407507 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:54:42.409427  407507 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:54:42.409576  407507 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:54:42.409707  407507 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:54:42.496731  407507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:42.513923  407507 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-672593 -n ha-672593
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-672593 logs -n 25: (1.485413968s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593:/home/docker/cp-test_ha-672593-m03_ha-672593.txt                       |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593 sudo cat                                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593.txt                                 |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m02:/home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m04 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp testdata/cp-test.txt                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593:/home/docker/cp-test_ha-672593-m04_ha-672593.txt                       |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593 sudo cat                                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593.txt                                 |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m02:/home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03:/home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m03 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-672593 node stop m02 -v=7                                                     | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:47:01
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:47:01.406932  402885 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:47:01.407221  402885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:47:01.407231  402885 out.go:304] Setting ErrFile to fd 2...
	I0805 11:47:01.407235  402885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:47:01.407430  402885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:47:01.408026  402885 out.go:298] Setting JSON to false
	I0805 11:47:01.409097  402885 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5368,"bootTime":1722853053,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:47:01.409167  402885 start.go:139] virtualization: kvm guest
	I0805 11:47:01.411485  402885 out.go:177] * [ha-672593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:47:01.412749  402885 notify.go:220] Checking for updates...
	I0805 11:47:01.412776  402885 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:47:01.413914  402885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:47:01.415104  402885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:47:01.416329  402885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:47:01.417431  402885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:47:01.418611  402885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:47:01.419828  402885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:47:01.454526  402885 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 11:47:01.455720  402885 start.go:297] selected driver: kvm2
	I0805 11:47:01.455736  402885 start.go:901] validating driver "kvm2" against <nil>
	I0805 11:47:01.455768  402885 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:47:01.456730  402885 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:47:01.456816  402885 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:47:01.472514  402885 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:47:01.472573  402885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:47:01.472803  402885 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:47:01.472868  402885 cni.go:84] Creating CNI manager for ""
	I0805 11:47:01.472880  402885 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 11:47:01.472885  402885 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 11:47:01.472947  402885 start.go:340] cluster config:
	{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:47:01.473039  402885 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:47:01.474952  402885 out.go:177] * Starting "ha-672593" primary control-plane node in "ha-672593" cluster
	I0805 11:47:01.476115  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:47:01.476152  402885 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 11:47:01.476171  402885 cache.go:56] Caching tarball of preloaded images
	I0805 11:47:01.476256  402885 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:47:01.476266  402885 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:47:01.476580  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:47:01.476599  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json: {Name:mk12aeb8990dfd2e3b7b889000f511c048d38e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:01.476722  402885 start.go:360] acquireMachinesLock for ha-672593: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:47:01.476748  402885 start.go:364] duration metric: took 15.125µs to acquireMachinesLock for "ha-672593"
	I0805 11:47:01.476768  402885 start.go:93] Provisioning new machine with config: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:47:01.476842  402885 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 11:47:01.478568  402885 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 11:47:01.478706  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:47:01.478754  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:47:01.493830  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0805 11:47:01.494257  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:47:01.494855  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:47:01.494883  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:47:01.495156  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:47:01.495344  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:01.495540  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:01.495679  402885 start.go:159] libmachine.API.Create for "ha-672593" (driver="kvm2")
	I0805 11:47:01.495706  402885 client.go:168] LocalClient.Create starting
	I0805 11:47:01.495769  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:47:01.495815  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:47:01.495835  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:47:01.495901  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:47:01.495926  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:47:01.495949  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:47:01.495981  402885 main.go:141] libmachine: Running pre-create checks...
	I0805 11:47:01.495992  402885 main.go:141] libmachine: (ha-672593) Calling .PreCreateCheck
	I0805 11:47:01.496378  402885 main.go:141] libmachine: (ha-672593) Calling .GetConfigRaw
	I0805 11:47:01.496812  402885 main.go:141] libmachine: Creating machine...
	I0805 11:47:01.496826  402885 main.go:141] libmachine: (ha-672593) Calling .Create
	I0805 11:47:01.496984  402885 main.go:141] libmachine: (ha-672593) Creating KVM machine...
	I0805 11:47:01.498181  402885 main.go:141] libmachine: (ha-672593) DBG | found existing default KVM network
	I0805 11:47:01.498912  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:01.498771  402908 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0805 11:47:01.498934  402885 main.go:141] libmachine: (ha-672593) DBG | created network xml: 
	I0805 11:47:01.498949  402885 main.go:141] libmachine: (ha-672593) DBG | <network>
	I0805 11:47:01.498976  402885 main.go:141] libmachine: (ha-672593) DBG |   <name>mk-ha-672593</name>
	I0805 11:47:01.498991  402885 main.go:141] libmachine: (ha-672593) DBG |   <dns enable='no'/>
	I0805 11:47:01.498998  402885 main.go:141] libmachine: (ha-672593) DBG |   
	I0805 11:47:01.499009  402885 main.go:141] libmachine: (ha-672593) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 11:47:01.499021  402885 main.go:141] libmachine: (ha-672593) DBG |     <dhcp>
	I0805 11:47:01.499091  402885 main.go:141] libmachine: (ha-672593) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 11:47:01.499113  402885 main.go:141] libmachine: (ha-672593) DBG |     </dhcp>
	I0805 11:47:01.499126  402885 main.go:141] libmachine: (ha-672593) DBG |   </ip>
	I0805 11:47:01.499134  402885 main.go:141] libmachine: (ha-672593) DBG |   
	I0805 11:47:01.499159  402885 main.go:141] libmachine: (ha-672593) DBG | </network>
	I0805 11:47:01.499180  402885 main.go:141] libmachine: (ha-672593) DBG | 
	I0805 11:47:01.504434  402885 main.go:141] libmachine: (ha-672593) DBG | trying to create private KVM network mk-ha-672593 192.168.39.0/24...
	I0805 11:47:01.570407  402885 main.go:141] libmachine: (ha-672593) DBG | private KVM network mk-ha-672593 192.168.39.0/24 created
	I0805 11:47:01.570444  402885 main.go:141] libmachine: (ha-672593) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593 ...
	I0805 11:47:01.570457  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:01.570404  402908 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:47:01.570492  402885 main.go:141] libmachine: (ha-672593) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:47:01.570653  402885 main.go:141] libmachine: (ha-672593) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:47:01.851874  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:01.851756  402908 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa...
	I0805 11:47:02.115451  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:02.115280  402908 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/ha-672593.rawdisk...
	I0805 11:47:02.115480  402885 main.go:141] libmachine: (ha-672593) DBG | Writing magic tar header
	I0805 11:47:02.115490  402885 main.go:141] libmachine: (ha-672593) DBG | Writing SSH key tar header
	I0805 11:47:02.115498  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:02.115426  402908 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593 ...
	I0805 11:47:02.115609  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593 (perms=drwx------)
	I0805 11:47:02.115624  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:47:02.115632  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593
	I0805 11:47:02.115641  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:47:02.115652  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:47:02.115662  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:47:02.115679  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:47:02.115705  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:47:02.115712  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:47:02.115717  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home
	I0805 11:47:02.115724  402885 main.go:141] libmachine: (ha-672593) DBG | Skipping /home - not owner
	I0805 11:47:02.115732  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:47:02.115759  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:47:02.115774  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:47:02.115844  402885 main.go:141] libmachine: (ha-672593) Creating domain...
	I0805 11:47:02.117016  402885 main.go:141] libmachine: (ha-672593) define libvirt domain using xml: 
	I0805 11:47:02.117034  402885 main.go:141] libmachine: (ha-672593) <domain type='kvm'>
	I0805 11:47:02.117041  402885 main.go:141] libmachine: (ha-672593)   <name>ha-672593</name>
	I0805 11:47:02.117064  402885 main.go:141] libmachine: (ha-672593)   <memory unit='MiB'>2200</memory>
	I0805 11:47:02.117070  402885 main.go:141] libmachine: (ha-672593)   <vcpu>2</vcpu>
	I0805 11:47:02.117074  402885 main.go:141] libmachine: (ha-672593)   <features>
	I0805 11:47:02.117079  402885 main.go:141] libmachine: (ha-672593)     <acpi/>
	I0805 11:47:02.117083  402885 main.go:141] libmachine: (ha-672593)     <apic/>
	I0805 11:47:02.117090  402885 main.go:141] libmachine: (ha-672593)     <pae/>
	I0805 11:47:02.117095  402885 main.go:141] libmachine: (ha-672593)     
	I0805 11:47:02.117101  402885 main.go:141] libmachine: (ha-672593)   </features>
	I0805 11:47:02.117106  402885 main.go:141] libmachine: (ha-672593)   <cpu mode='host-passthrough'>
	I0805 11:47:02.117111  402885 main.go:141] libmachine: (ha-672593)   
	I0805 11:47:02.117117  402885 main.go:141] libmachine: (ha-672593)   </cpu>
	I0805 11:47:02.117122  402885 main.go:141] libmachine: (ha-672593)   <os>
	I0805 11:47:02.117132  402885 main.go:141] libmachine: (ha-672593)     <type>hvm</type>
	I0805 11:47:02.117140  402885 main.go:141] libmachine: (ha-672593)     <boot dev='cdrom'/>
	I0805 11:47:02.117149  402885 main.go:141] libmachine: (ha-672593)     <boot dev='hd'/>
	I0805 11:47:02.117156  402885 main.go:141] libmachine: (ha-672593)     <bootmenu enable='no'/>
	I0805 11:47:02.117160  402885 main.go:141] libmachine: (ha-672593)   </os>
	I0805 11:47:02.117165  402885 main.go:141] libmachine: (ha-672593)   <devices>
	I0805 11:47:02.117171  402885 main.go:141] libmachine: (ha-672593)     <disk type='file' device='cdrom'>
	I0805 11:47:02.117178  402885 main.go:141] libmachine: (ha-672593)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/boot2docker.iso'/>
	I0805 11:47:02.117184  402885 main.go:141] libmachine: (ha-672593)       <target dev='hdc' bus='scsi'/>
	I0805 11:47:02.117191  402885 main.go:141] libmachine: (ha-672593)       <readonly/>
	I0805 11:47:02.117195  402885 main.go:141] libmachine: (ha-672593)     </disk>
	I0805 11:47:02.117200  402885 main.go:141] libmachine: (ha-672593)     <disk type='file' device='disk'>
	I0805 11:47:02.117209  402885 main.go:141] libmachine: (ha-672593)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:47:02.117222  402885 main.go:141] libmachine: (ha-672593)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/ha-672593.rawdisk'/>
	I0805 11:47:02.117230  402885 main.go:141] libmachine: (ha-672593)       <target dev='hda' bus='virtio'/>
	I0805 11:47:02.117234  402885 main.go:141] libmachine: (ha-672593)     </disk>
	I0805 11:47:02.117239  402885 main.go:141] libmachine: (ha-672593)     <interface type='network'>
	I0805 11:47:02.117245  402885 main.go:141] libmachine: (ha-672593)       <source network='mk-ha-672593'/>
	I0805 11:47:02.117250  402885 main.go:141] libmachine: (ha-672593)       <model type='virtio'/>
	I0805 11:47:02.117257  402885 main.go:141] libmachine: (ha-672593)     </interface>
	I0805 11:47:02.117271  402885 main.go:141] libmachine: (ha-672593)     <interface type='network'>
	I0805 11:47:02.117277  402885 main.go:141] libmachine: (ha-672593)       <source network='default'/>
	I0805 11:47:02.117282  402885 main.go:141] libmachine: (ha-672593)       <model type='virtio'/>
	I0805 11:47:02.117290  402885 main.go:141] libmachine: (ha-672593)     </interface>
	I0805 11:47:02.117321  402885 main.go:141] libmachine: (ha-672593)     <serial type='pty'>
	I0805 11:47:02.117344  402885 main.go:141] libmachine: (ha-672593)       <target port='0'/>
	I0805 11:47:02.117360  402885 main.go:141] libmachine: (ha-672593)     </serial>
	I0805 11:47:02.117375  402885 main.go:141] libmachine: (ha-672593)     <console type='pty'>
	I0805 11:47:02.117386  402885 main.go:141] libmachine: (ha-672593)       <target type='serial' port='0'/>
	I0805 11:47:02.117412  402885 main.go:141] libmachine: (ha-672593)     </console>
	I0805 11:47:02.117425  402885 main.go:141] libmachine: (ha-672593)     <rng model='virtio'>
	I0805 11:47:02.117445  402885 main.go:141] libmachine: (ha-672593)       <backend model='random'>/dev/random</backend>
	I0805 11:47:02.117461  402885 main.go:141] libmachine: (ha-672593)     </rng>
	I0805 11:47:02.117472  402885 main.go:141] libmachine: (ha-672593)     
	I0805 11:47:02.117482  402885 main.go:141] libmachine: (ha-672593)     
	I0805 11:47:02.117492  402885 main.go:141] libmachine: (ha-672593)   </devices>
	I0805 11:47:02.117507  402885 main.go:141] libmachine: (ha-672593) </domain>
	I0805 11:47:02.117522  402885 main.go:141] libmachine: (ha-672593) 
	I0805 11:47:02.121948  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:fd:a1:d4 in network default
	I0805 11:47:02.122593  402885 main.go:141] libmachine: (ha-672593) Ensuring networks are active...
	I0805 11:47:02.122620  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:02.123298  402885 main.go:141] libmachine: (ha-672593) Ensuring network default is active
	I0805 11:47:02.123585  402885 main.go:141] libmachine: (ha-672593) Ensuring network mk-ha-672593 is active
	I0805 11:47:02.124089  402885 main.go:141] libmachine: (ha-672593) Getting domain xml...
	I0805 11:47:02.124741  402885 main.go:141] libmachine: (ha-672593) Creating domain...
	I0805 11:47:03.319883  402885 main.go:141] libmachine: (ha-672593) Waiting to get IP...
	I0805 11:47:03.320698  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:03.321100  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:03.321129  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:03.321084  402908 retry.go:31] will retry after 197.742325ms: waiting for machine to come up
	I0805 11:47:03.520616  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:03.521078  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:03.521107  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:03.521037  402908 retry.go:31] will retry after 332.591294ms: waiting for machine to come up
	I0805 11:47:03.855863  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:03.856337  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:03.856368  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:03.856293  402908 retry.go:31] will retry after 293.806863ms: waiting for machine to come up
	I0805 11:47:04.151867  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:04.152292  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:04.152327  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:04.152261  402908 retry.go:31] will retry after 546.881134ms: waiting for machine to come up
	I0805 11:47:04.701205  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:04.701717  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:04.701747  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:04.701681  402908 retry.go:31] will retry after 690.115664ms: waiting for machine to come up
	I0805 11:47:05.393676  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:05.394222  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:05.394251  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:05.394152  402908 retry.go:31] will retry after 700.558042ms: waiting for machine to come up
	I0805 11:47:06.096140  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:06.096609  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:06.096657  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:06.096558  402908 retry.go:31] will retry after 1.106283154s: waiting for machine to come up
	I0805 11:47:07.204382  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:07.204777  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:07.204803  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:07.204737  402908 retry.go:31] will retry after 909.769737ms: waiting for machine to come up
	I0805 11:47:08.115835  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:08.116335  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:08.116368  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:08.116278  402908 retry.go:31] will retry after 1.197387753s: waiting for machine to come up
	I0805 11:47:09.315548  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:09.315864  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:09.315895  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:09.315809  402908 retry.go:31] will retry after 1.807716024s: waiting for machine to come up
	I0805 11:47:11.125701  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:11.126191  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:11.126215  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:11.126140  402908 retry.go:31] will retry after 1.998972255s: waiting for machine to come up
	I0805 11:47:13.127302  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:13.127827  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:13.127858  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:13.127717  402908 retry.go:31] will retry after 3.556381088s: waiting for machine to come up
	I0805 11:47:16.685699  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:16.686021  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:16.686045  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:16.685991  402908 retry.go:31] will retry after 4.271029073s: waiting for machine to come up
	I0805 11:47:20.962319  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:20.962715  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:20.962744  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:20.962659  402908 retry.go:31] will retry after 5.361767594s: waiting for machine to come up
	I0805 11:47:26.329675  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:26.330117  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has current primary IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:26.330139  402885 main.go:141] libmachine: (ha-672593) Found IP for machine: 192.168.39.102
	I0805 11:47:26.330152  402885 main.go:141] libmachine: (ha-672593) Reserving static IP address...
	I0805 11:47:26.330520  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find host DHCP lease matching {name: "ha-672593", mac: "52:54:00:9e:d5:95", ip: "192.168.39.102"} in network mk-ha-672593
	I0805 11:47:26.403576  402885 main.go:141] libmachine: (ha-672593) DBG | Getting to WaitForSSH function...
	I0805 11:47:26.403615  402885 main.go:141] libmachine: (ha-672593) Reserved static IP address: 192.168.39.102
	I0805 11:47:26.403627  402885 main.go:141] libmachine: (ha-672593) Waiting for SSH to be available...
	I0805 11:47:26.406287  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:26.406640  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593
	I0805 11:47:26.406714  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find defined IP address of network mk-ha-672593 interface with MAC address 52:54:00:9e:d5:95
	I0805 11:47:26.406879  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH client type: external
	I0805 11:47:26.406903  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa (-rw-------)
	I0805 11:47:26.406928  402885 main.go:141] libmachine: (ha-672593) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:47:26.406941  402885 main.go:141] libmachine: (ha-672593) DBG | About to run SSH command:
	I0805 11:47:26.406956  402885 main.go:141] libmachine: (ha-672593) DBG | exit 0
	I0805 11:47:26.410442  402885 main.go:141] libmachine: (ha-672593) DBG | SSH cmd err, output: exit status 255: 
	I0805 11:47:26.410467  402885 main.go:141] libmachine: (ha-672593) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0805 11:47:26.410478  402885 main.go:141] libmachine: (ha-672593) DBG | command : exit 0
	I0805 11:47:26.410490  402885 main.go:141] libmachine: (ha-672593) DBG | err     : exit status 255
	I0805 11:47:26.410500  402885 main.go:141] libmachine: (ha-672593) DBG | output  : 
	I0805 11:47:29.412979  402885 main.go:141] libmachine: (ha-672593) DBG | Getting to WaitForSSH function...
	I0805 11:47:29.415178  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.415509  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.415538  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.415640  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH client type: external
	I0805 11:47:29.415669  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa (-rw-------)
	I0805 11:47:29.415707  402885 main.go:141] libmachine: (ha-672593) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:47:29.415718  402885 main.go:141] libmachine: (ha-672593) DBG | About to run SSH command:
	I0805 11:47:29.415733  402885 main.go:141] libmachine: (ha-672593) DBG | exit 0
	I0805 11:47:29.543923  402885 main.go:141] libmachine: (ha-672593) DBG | SSH cmd err, output: <nil>: 
	I0805 11:47:29.544178  402885 main.go:141] libmachine: (ha-672593) KVM machine creation complete!
	I0805 11:47:29.544569  402885 main.go:141] libmachine: (ha-672593) Calling .GetConfigRaw
	I0805 11:47:29.545201  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:29.545407  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:29.545583  402885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:47:29.545614  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:47:29.546800  402885 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:47:29.546813  402885 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:47:29.546820  402885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:47:29.546825  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.548715  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.549065  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.549092  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.549216  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.549406  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.549545  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.549692  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.549833  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.550100  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.550114  402885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:47:29.663179  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:47:29.663202  402885 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:47:29.663210  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.666721  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.667145  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.667166  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.667334  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.667524  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.667687  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.667847  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.668030  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.668198  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.668208  402885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:47:29.780645  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:47:29.780749  402885 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:47:29.780761  402885 main.go:141] libmachine: Provisioning with buildroot...
	I0805 11:47:29.780768  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:29.781042  402885 buildroot.go:166] provisioning hostname "ha-672593"
	I0805 11:47:29.781081  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:29.781288  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.783827  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.784232  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.784264  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.784384  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.784556  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.784705  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.784879  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.785072  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.785238  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.785261  402885 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593 && echo "ha-672593" | sudo tee /etc/hostname
	I0805 11:47:29.911387  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593
	
	I0805 11:47:29.911455  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.914263  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.914580  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.914605  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.914787  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.915038  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.915221  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.915385  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.915580  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.915795  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.915813  402885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:47:30.040854  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:47:30.040890  402885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:47:30.040934  402885 buildroot.go:174] setting up certificates
	I0805 11:47:30.040947  402885 provision.go:84] configureAuth start
	I0805 11:47:30.040962  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:30.041282  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:30.043919  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.044419  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.044445  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.044586  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.046846  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.047093  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.047122  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.047264  402885 provision.go:143] copyHostCerts
	I0805 11:47:30.047300  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:47:30.047380  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:47:30.047395  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:47:30.047483  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:47:30.047616  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:47:30.047647  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:47:30.047659  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:47:30.047704  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:47:30.047815  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:47:30.047856  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:47:30.047869  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:47:30.047918  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:47:30.048067  402885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593 san=[127.0.0.1 192.168.39.102 ha-672593 localhost minikube]
	I0805 11:47:30.244143  402885 provision.go:177] copyRemoteCerts
	I0805 11:47:30.244208  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:47:30.244237  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.246801  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.247127  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.247153  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.247352  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.247580  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.247765  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.247930  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:30.333425  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:47:30.333489  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 11:47:30.356829  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:47:30.356901  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:47:30.380400  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:47:30.380461  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0805 11:47:30.403423  402885 provision.go:87] duration metric: took 362.461937ms to configureAuth
	I0805 11:47:30.403448  402885 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:47:30.403621  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:47:30.403706  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.405998  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.406288  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.406315  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.406439  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.406651  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.406830  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.407075  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.407275  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:30.407449  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:30.407466  402885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:47:30.677264  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:47:30.677295  402885 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:47:30.677303  402885 main.go:141] libmachine: (ha-672593) Calling .GetURL
	I0805 11:47:30.678830  402885 main.go:141] libmachine: (ha-672593) DBG | Using libvirt version 6000000
	I0805 11:47:30.681221  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.681528  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.681556  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.681800  402885 main.go:141] libmachine: Docker is up and running!
	I0805 11:47:30.681828  402885 main.go:141] libmachine: Reticulating splines...
	I0805 11:47:30.681838  402885 client.go:171] duration metric: took 29.18612156s to LocalClient.Create
	I0805 11:47:30.681864  402885 start.go:167] duration metric: took 29.186183459s to libmachine.API.Create "ha-672593"
	I0805 11:47:30.681876  402885 start.go:293] postStartSetup for "ha-672593" (driver="kvm2")
	I0805 11:47:30.681888  402885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:47:30.681906  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.682170  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:47:30.682194  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.684393  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.684666  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.684693  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.684853  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.685033  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.685184  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.685295  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:30.770326  402885 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:47:30.774907  402885 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:47:30.774936  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:47:30.775025  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:47:30.775100  402885 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:47:30.775107  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:47:30.775211  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:47:30.784903  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:47:30.812355  402885 start.go:296] duration metric: took 130.462768ms for postStartSetup
	I0805 11:47:30.812515  402885 main.go:141] libmachine: (ha-672593) Calling .GetConfigRaw
	I0805 11:47:30.813149  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:30.815890  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.816226  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.816254  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.816544  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:47:30.816756  402885 start.go:128] duration metric: took 29.339901951s to createHost
	I0805 11:47:30.816797  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.818999  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.819327  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.819366  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.819462  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.819647  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.819822  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.819935  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.820104  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:30.820329  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:30.820353  402885 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 11:47:30.932357  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722858450.914487602
	
	I0805 11:47:30.932384  402885 fix.go:216] guest clock: 1722858450.914487602
	I0805 11:47:30.932394  402885 fix.go:229] Guest: 2024-08-05 11:47:30.914487602 +0000 UTC Remote: 2024-08-05 11:47:30.816784327 +0000 UTC m=+29.447989374 (delta=97.703275ms)
	I0805 11:47:30.932421  402885 fix.go:200] guest clock delta is within tolerance: 97.703275ms
	I0805 11:47:30.932428  402885 start.go:83] releasing machines lock for "ha-672593", held for 29.455670749s
	I0805 11:47:30.932453  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.932785  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:30.935097  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.935406  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.935434  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.935581  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.936066  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.936245  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.936332  402885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:47:30.936373  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.936471  402885 ssh_runner.go:195] Run: cat /version.json
	I0805 11:47:30.936504  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.938883  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939052  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939238  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.939260  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939387  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.939411  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939423  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.939618  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.939633  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.939793  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.939800  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.939946  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.939933  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:30.940044  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:31.039737  402885 ssh_runner.go:195] Run: systemctl --version
	I0805 11:47:31.045475  402885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:47:31.197205  402885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:47:31.203650  402885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:47:31.203709  402885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:47:31.219157  402885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:47:31.219181  402885 start.go:495] detecting cgroup driver to use...
	I0805 11:47:31.219243  402885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:47:31.235548  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:47:31.249152  402885 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:47:31.249217  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:47:31.262673  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:47:31.276464  402885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:47:31.388840  402885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:47:31.545015  402885 docker.go:233] disabling docker service ...
	I0805 11:47:31.545107  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:47:31.559814  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:47:31.572831  402885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:47:31.698544  402885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:47:31.820235  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:47:31.834206  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:47:31.852152  402885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:47:31.852231  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.862655  402885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:47:31.862738  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.873423  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.883959  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.894368  402885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:47:31.906774  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.918325  402885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.936356  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.948286  402885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:47:31.959200  402885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:47:31.959239  402885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:47:31.974768  402885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:47:31.985693  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:47:32.126784  402885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:47:32.260710  402885 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:47:32.260793  402885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:47:32.265705  402885 start.go:563] Will wait 60s for crictl version
	I0805 11:47:32.265775  402885 ssh_runner.go:195] Run: which crictl
	I0805 11:47:32.269618  402885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:47:32.310458  402885 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:47:32.310546  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:47:32.338923  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:47:32.367635  402885 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:47:32.368941  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:32.371554  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:32.371976  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:32.372006  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:32.372218  402885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:47:32.376375  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:47:32.388848  402885 kubeadm.go:883] updating cluster {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 11:47:32.388986  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:47:32.389053  402885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:47:32.427488  402885 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 11:47:32.427574  402885 ssh_runner.go:195] Run: which lz4
	I0805 11:47:32.431340  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 11:47:32.431455  402885 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 11:47:32.435364  402885 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 11:47:32.435390  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 11:47:33.806142  402885 crio.go:462] duration metric: took 1.374734579s to copy over tarball
	I0805 11:47:33.806232  402885 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 11:47:35.968986  402885 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.162714569s)
	I0805 11:47:35.969032  402885 crio.go:469] duration metric: took 2.162856294s to extract the tarball
	I0805 11:47:35.969045  402885 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 11:47:36.007014  402885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:47:36.054239  402885 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:47:36.054272  402885 cache_images.go:84] Images are preloaded, skipping loading
	I0805 11:47:36.054283  402885 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0805 11:47:36.054430  402885 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:47:36.054499  402885 ssh_runner.go:195] Run: crio config
	I0805 11:47:36.104058  402885 cni.go:84] Creating CNI manager for ""
	I0805 11:47:36.104084  402885 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 11:47:36.104097  402885 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 11:47:36.104127  402885 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-672593 NodeName:ha-672593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 11:47:36.104307  402885 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-672593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 11:47:36.104341  402885 kube-vip.go:115] generating kube-vip config ...
	I0805 11:47:36.104392  402885 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:47:36.123514  402885 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:47:36.123633  402885 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0805 11:47:36.123690  402885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:47:36.133420  402885 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 11:47:36.133496  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 11:47:36.142489  402885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0805 11:47:36.159165  402885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:47:36.175609  402885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0805 11:47:36.192086  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0805 11:47:36.207817  402885 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:47:36.211345  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:47:36.222877  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:47:36.352753  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:47:36.370110  402885 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.102
	I0805 11:47:36.370135  402885 certs.go:194] generating shared ca certs ...
	I0805 11:47:36.370156  402885 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.370327  402885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:47:36.370389  402885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:47:36.370405  402885 certs.go:256] generating profile certs ...
	I0805 11:47:36.370550  402885 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:47:36.370571  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt with IP's: []
	I0805 11:47:36.443467  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt ...
	I0805 11:47:36.443497  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt: {Name:mk64efb16e1b54b1ad46318bd3555907edacc1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.443681  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key ...
	I0805 11:47:36.443696  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key: {Name:mka046d85df2c8ea9a81fa425ffb812340b51d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.443826  402885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0
	I0805 11:47:36.443846  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I0805 11:47:36.625998  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0 ...
	I0805 11:47:36.626035  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0: {Name:mk971fadbe0c7eacc8f710f7033a3327fa9ee2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.626234  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0 ...
	I0805 11:47:36.626253  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0: {Name:mk1c5497454e604075e29d080c0dc346f196be2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.626361  402885 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:47:36.626442  402885 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:47:36.626498  402885 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:47:36.626515  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt with IP's: []
	I0805 11:47:36.984398  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt ...
	I0805 11:47:36.984430  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt: {Name:mk4cd8e8ae8575603b5e1fa8b77e6557d8c1ece5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.984602  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key ...
	I0805 11:47:36.984623  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key: {Name:mkce91befe6e8431fd2dfc816ef3f4abd3a91050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.984720  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:47:36.984744  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:47:36.984756  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:47:36.984769  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:47:36.984782  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:47:36.984806  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:47:36.984819  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:47:36.984831  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:47:36.984882  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:47:36.984918  402885 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:47:36.984927  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:47:36.984948  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:47:36.984977  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:47:36.985007  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:47:36.985044  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:47:36.985075  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:36.985089  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:47:36.985106  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:47:36.985694  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:47:37.011987  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:47:37.035377  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:47:37.058543  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:47:37.081913  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 11:47:37.105057  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 11:47:37.130939  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:47:37.157787  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:47:37.186187  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:47:37.213848  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:47:37.237251  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:47:37.260594  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 11:47:37.277763  402885 ssh_runner.go:195] Run: openssl version
	I0805 11:47:37.284446  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:47:37.295487  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:47:37.299976  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:47:37.300031  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:47:37.305863  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 11:47:37.316889  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:47:37.327673  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:47:37.332181  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:47:37.332236  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:47:37.338018  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 11:47:37.348910  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:47:37.359270  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:37.363584  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:37.363622  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:37.369239  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:47:37.379604  402885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:47:37.383537  402885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:47:37.383591  402885 kubeadm.go:392] StartCluster: {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:47:37.383664  402885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 11:47:37.383715  402885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 11:47:37.419857  402885 cri.go:89] found id: ""
	I0805 11:47:37.419925  402885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 11:47:37.430051  402885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 11:47:37.439307  402885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 11:47:37.448637  402885 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 11:47:37.448654  402885 kubeadm.go:157] found existing configuration files:
	
	I0805 11:47:37.448703  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 11:47:37.457516  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 11:47:37.457576  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 11:47:37.466851  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 11:47:37.475755  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 11:47:37.475799  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 11:47:37.485152  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 11:47:37.494271  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 11:47:37.494313  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 11:47:37.503315  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 11:47:37.512128  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 11:47:37.512173  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
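The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and deleted when the endpoint is absent. In this run the files simply do not exist yet (fresh node), so every grep exits non-zero and the rm is a no-op. A condensed sketch of the check for one file, assuming the same endpoint as in the log:

    f=/etc/kubernetes/admin.conf
    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$f"; then
      sudo rm -f "$f"    # stale or missing; kubeadm init will regenerate it
    fi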
	I0805 11:47:37.521489  402885 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 11:47:37.624606  402885 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 11:47:37.624706  402885 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 11:47:37.757030  402885 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 11:47:37.757209  402885 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 11:47:37.757380  402885 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 11:47:37.962278  402885 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 11:47:37.964191  402885 out.go:204]   - Generating certificates and keys ...
	I0805 11:47:37.964276  402885 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 11:47:37.964378  402885 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 11:47:38.139549  402885 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 11:47:38.277362  402885 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 11:47:38.403783  402885 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 11:47:38.484752  402885 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 11:47:38.681349  402885 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 11:47:38.681515  402885 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-672593 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0805 11:47:38.773264  402885 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 11:47:38.773407  402885 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-672593 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0805 11:47:38.924683  402885 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 11:47:39.021527  402885 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 11:47:39.134668  402885 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 11:47:39.134782  402885 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 11:47:39.422524  402885 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 11:47:39.955462  402885 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 11:47:40.308237  402885 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 11:47:40.361656  402885 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 11:47:40.479271  402885 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 11:47:40.479670  402885 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 11:47:40.482134  402885 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 11:47:40.483922  402885 out.go:204]   - Booting up control plane ...
	I0805 11:47:40.484030  402885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 11:47:40.484132  402885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 11:47:40.484213  402885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 11:47:40.498471  402885 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 11:47:40.499335  402885 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 11:47:40.499412  402885 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 11:47:40.625259  402885 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 11:47:40.625403  402885 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 11:47:41.626409  402885 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00164977s
	I0805 11:47:41.626669  402885 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 11:47:47.305194  402885 kubeadm.go:310] [api-check] The API server is healthy after 5.680388911s
	I0805 11:47:47.319874  402885 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 11:47:47.332106  402885 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 11:47:47.367795  402885 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 11:47:47.368002  402885 kubeadm.go:310] [mark-control-plane] Marking the node ha-672593 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 11:47:47.379722  402885 kubeadm.go:310] [bootstrap-token] Using token: rofbjc.vrvrgkgc24h3j2yi
	I0805 11:47:47.381086  402885 out.go:204]   - Configuring RBAC rules ...
	I0805 11:47:47.381212  402885 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 11:47:47.392759  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 11:47:47.400174  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 11:47:47.405738  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 11:47:47.408907  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 11:47:47.411987  402885 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 11:47:47.711919  402885 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 11:47:48.186021  402885 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 11:47:48.711616  402885 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 11:47:48.711640  402885 kubeadm.go:310] 
	I0805 11:47:48.711690  402885 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 11:47:48.711694  402885 kubeadm.go:310] 
	I0805 11:47:48.711852  402885 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 11:47:48.711878  402885 kubeadm.go:310] 
	I0805 11:47:48.711939  402885 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 11:47:48.712020  402885 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 11:47:48.712090  402885 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 11:47:48.712103  402885 kubeadm.go:310] 
	I0805 11:47:48.712173  402885 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 11:47:48.712186  402885 kubeadm.go:310] 
	I0805 11:47:48.712249  402885 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 11:47:48.712263  402885 kubeadm.go:310] 
	I0805 11:47:48.712334  402885 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 11:47:48.712448  402885 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 11:47:48.712537  402885 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 11:47:48.712550  402885 kubeadm.go:310] 
	I0805 11:47:48.712663  402885 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 11:47:48.712767  402885 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 11:47:48.712777  402885 kubeadm.go:310] 
	I0805 11:47:48.712875  402885 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rofbjc.vrvrgkgc24h3j2yi \
	I0805 11:47:48.712981  402885 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 11:47:48.713009  402885 kubeadm.go:310] 	--control-plane 
	I0805 11:47:48.713022  402885 kubeadm.go:310] 
	I0805 11:47:48.713117  402885 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 11:47:48.713128  402885 kubeadm.go:310] 
	I0805 11:47:48.713258  402885 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rofbjc.vrvrgkgc24h3j2yi \
	I0805 11:47:48.713435  402885 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 11:47:48.713543  402885 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
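The join commands printed by kubeadm embed the bootstrap token rofbjc.vrvrgkgc24h3j2yi, which by default expires after 24 hours. minikube drives the later node joins itself, but for reference the standard kubeadm way to mint a fresh worker join command, and to re-upload control-plane certificates for a further control-plane join, would be (generic kubeadm usage, not commands taken from this run; run on an existing control-plane node):

    # print a new worker join command with a freshly created token
    sudo kubeadm token create --print-join-command
    # re-upload the control-plane certificates and print the certificate key
    sudo kubeadm init phase upload-certs --upload-certs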
	I0805 11:47:48.713554  402885 cni.go:84] Creating CNI manager for ""
	I0805 11:47:48.713560  402885 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 11:47:48.715158  402885 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 11:47:48.716496  402885 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 11:47:48.721941  402885 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 11:47:48.721958  402885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 11:47:48.742232  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
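The applied manifest is the kindnet CNI recommended at 11:47:48.713560 because a multi-node (HA) cluster was requested; it is staged on the node as /var/tmp/minikube/cni.yaml (2438 bytes) and applied with the bundled kubectl. If the rollout needs to be verified by hand, something like the following works from the node (the app=kindnet label is kindnet's usual selector, assumed here rather than taken from the log):

    sudo cat /var/tmp/minikube/cni.yaml
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get pods -n kube-system -l app=kindnet -o wide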
	I0805 11:47:49.123502  402885 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 11:47:49.123602  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:49.123629  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-672593 minikube.k8s.io/updated_at=2024_08_05T11_47_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=ha-672593 minikube.k8s.io/primary=true
	I0805 11:47:49.265775  402885 ops.go:34] apiserver oom_adj: -16
	I0805 11:47:49.265862  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:49.766504  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:50.266289  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:50.766141  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:51.266826  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:51.765994  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:52.266341  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:52.766819  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:53.265993  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:53.766780  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:54.266174  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:54.766950  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:55.266643  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:55.766565  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:56.266555  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:56.766945  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:57.266238  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:57.766591  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:58.266182  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:58.766160  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:59.266832  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:59.765979  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:48:00.265995  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:48:00.356758  402885 kubeadm.go:1113] duration metric: took 11.233215725s to wait for elevateKubeSystemPrivileges
	I0805 11:48:00.356804  402885 kubeadm.go:394] duration metric: took 22.97321577s to StartCluster
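The burst of `kubectl get sa default` calls at roughly half-second intervals between 11:47:49 and 11:48:00 is minikube polling until the cluster's default ServiceAccount exists before it finishes elevating kube-system privileges; the 11.23s metric above is that total wait. A sketch of the same wait loop, assuming the check is only for the ServiceAccount's existence:

    K="sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    until $K get sa default >/dev/null 2>&1; do
      sleep 0.5    # retry until the controller manager has created the default ServiceAccount
    done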
	I0805 11:48:00.356828  402885 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:00.356910  402885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:48:00.357556  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:00.357769  402885 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:00.357792  402885 start.go:241] waiting for startup goroutines ...
	I0805 11:48:00.357777  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 11:48:00.357792  402885 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 11:48:00.357854  402885 addons.go:69] Setting storage-provisioner=true in profile "ha-672593"
	I0805 11:48:00.357877  402885 addons.go:69] Setting default-storageclass=true in profile "ha-672593"
	I0805 11:48:00.357926  402885 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-672593"
	I0805 11:48:00.357886  402885 addons.go:234] Setting addon storage-provisioner=true in "ha-672593"
	I0805 11:48:00.357999  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:00.358011  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:00.358440  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.358477  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.358440  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.358553  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.373501  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
	I0805 11:48:00.373587  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I0805 11:48:00.374054  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.374090  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.374598  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.374614  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.374727  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.374754  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.375056  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.375072  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.375279  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:00.375644  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.375680  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.377443  402885 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:48:00.377697  402885 kapi.go:59] client config for ha-672593: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key", CAFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 11:48:00.378183  402885 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 11:48:00.378399  402885 addons.go:234] Setting addon default-storageclass=true in "ha-672593"
	I0805 11:48:00.378432  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:00.378673  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.378694  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.391307  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I0805 11:48:00.391871  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.392426  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.392447  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.392774  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.392954  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:00.393569  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0805 11:48:00.394032  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.394569  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.394593  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.394932  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.395040  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:48:00.395555  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.395590  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.397044  402885 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 11:48:00.398512  402885 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:48:00.398535  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 11:48:00.398555  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:00.401399  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.401772  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:00.401794  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.402038  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:00.402203  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:00.402341  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:00.402490  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:00.412017  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0805 11:48:00.412450  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.413008  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.413034  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.413379  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.413561  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:00.414973  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:48:00.415191  402885 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 11:48:00.415206  402885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 11:48:00.415222  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:00.417804  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.418274  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:00.418299  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.418470  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:00.418640  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:00.418826  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:00.418970  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:00.490607  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 11:48:00.551074  402885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 11:48:00.573464  402885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:48:01.002396  402885 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
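The pipeline run at 11:48:00.490607 patches the coredns ConfigMap in place: a hosts block is inserted ahead of the forward directive so that host.minikube.internal resolves to the host-side gateway 192.168.39.1, and a log directive is inserted before errors. Reconstructed from the sed expressions in that command, the fragment added to the Corefile is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }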
	I0805 11:48:01.002507  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.002530  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.002830  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.002853  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.002867  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.002869  402885 main.go:141] libmachine: (ha-672593) DBG | Closing plugin on server side
	I0805 11:48:01.002877  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.003123  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.003139  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.003265  402885 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 11:48:01.003275  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:01.003286  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:01.003294  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:01.027307  402885 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0805 11:48:01.029401  402885 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 11:48:01.029417  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:01.029425  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:01.029429  402885 round_trippers.go:473]     Content-Type: application/json
	I0805 11:48:01.029433  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:01.034128  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:48:01.041462  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.041477  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.041790  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.041809  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.041832  402885 main.go:141] libmachine: (ha-672593) DBG | Closing plugin on server side
	I0805 11:48:01.316901  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.316932  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.317371  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.317401  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.317412  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.317420  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.317698  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.317708  402885 main.go:141] libmachine: (ha-672593) DBG | Closing plugin on server side
	I0805 11:48:01.317720  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.319871  402885 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 11:48:01.321367  402885 addons.go:510] duration metric: took 963.568265ms for enable addons: enabled=[default-storageclass storage-provisioner]
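Both addon manifests are applied with the bundled kubectl from /etc/kubernetes/addons/, and the PUT to storageclasses/standard at 11:48:01.029401 appears to be the step that marks `standard` as the default StorageClass. On a healthy cluster the result could be confirmed with commands like these (hypothetical invocations, not part of this run; the kubeconfig context is assumed to match the profile name):

    out/minikube-linux-amd64 -p ha-672593 addons list
    kubectl --context ha-672593 get storageclass
    kubectl --context ha-672593 -n kube-system get pod storage-provisioner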
	I0805 11:48:01.321416  402885 start.go:246] waiting for cluster config update ...
	I0805 11:48:01.321443  402885 start.go:255] writing updated cluster config ...
	I0805 11:48:01.323158  402885 out.go:177] 
	I0805 11:48:01.324946  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:01.325050  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:48:01.326478  402885 out.go:177] * Starting "ha-672593-m02" control-plane node in "ha-672593" cluster
	I0805 11:48:01.327903  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:48:01.327937  402885 cache.go:56] Caching tarball of preloaded images
	I0805 11:48:01.328091  402885 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:48:01.328112  402885 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:48:01.328225  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:48:01.328526  402885 start.go:360] acquireMachinesLock for ha-672593-m02: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:48:01.328611  402885 start.go:364] duration metric: took 49.348µs to acquireMachinesLock for "ha-672593-m02"
	I0805 11:48:01.328645  402885 start.go:93] Provisioning new machine with config: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:01.328755  402885 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0805 11:48:01.330357  402885 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 11:48:01.330488  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:01.330522  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:01.345624  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0805 11:48:01.346110  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:01.346627  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:01.346648  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:01.346924  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:01.347103  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:01.347229  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:01.347387  402885 start.go:159] libmachine.API.Create for "ha-672593" (driver="kvm2")
	I0805 11:48:01.347409  402885 client.go:168] LocalClient.Create starting
	I0805 11:48:01.347439  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:48:01.347487  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:48:01.347508  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:48:01.347578  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:48:01.347605  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:48:01.347617  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:48:01.347642  402885 main.go:141] libmachine: Running pre-create checks...
	I0805 11:48:01.347654  402885 main.go:141] libmachine: (ha-672593-m02) Calling .PreCreateCheck
	I0805 11:48:01.347883  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetConfigRaw
	I0805 11:48:01.348402  402885 main.go:141] libmachine: Creating machine...
	I0805 11:48:01.348423  402885 main.go:141] libmachine: (ha-672593-m02) Calling .Create
	I0805 11:48:01.348600  402885 main.go:141] libmachine: (ha-672593-m02) Creating KVM machine...
	I0805 11:48:01.349851  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found existing default KVM network
	I0805 11:48:01.349995  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found existing private KVM network mk-ha-672593
	I0805 11:48:01.350143  402885 main.go:141] libmachine: (ha-672593-m02) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02 ...
	I0805 11:48:01.350168  402885 main.go:141] libmachine: (ha-672593-m02) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:48:01.350241  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.350134  403313 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:48:01.350361  402885 main.go:141] libmachine: (ha-672593-m02) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:48:01.641041  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.640909  403313 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa...
	I0805 11:48:01.696896  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.696742  403313 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/ha-672593-m02.rawdisk...
	I0805 11:48:01.696947  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Writing magic tar header
	I0805 11:48:01.696965  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Writing SSH key tar header
	I0805 11:48:01.696979  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.696920  403313 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02 ...
	I0805 11:48:01.697096  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02
	I0805 11:48:01.697150  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02 (perms=drwx------)
	I0805 11:48:01.697180  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:48:01.697196  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:48:01.697207  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:48:01.697219  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:48:01.697227  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:48:01.697236  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:48:01.697245  402885 main.go:141] libmachine: (ha-672593-m02) Creating domain...
	I0805 11:48:01.697252  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:48:01.697266  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:48:01.697279  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:48:01.697293  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:48:01.697301  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home
	I0805 11:48:01.697309  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Skipping /home - not owner
	I0805 11:48:01.698416  402885 main.go:141] libmachine: (ha-672593-m02) define libvirt domain using xml: 
	I0805 11:48:01.698443  402885 main.go:141] libmachine: (ha-672593-m02) <domain type='kvm'>
	I0805 11:48:01.698454  402885 main.go:141] libmachine: (ha-672593-m02)   <name>ha-672593-m02</name>
	I0805 11:48:01.698462  402885 main.go:141] libmachine: (ha-672593-m02)   <memory unit='MiB'>2200</memory>
	I0805 11:48:01.698471  402885 main.go:141] libmachine: (ha-672593-m02)   <vcpu>2</vcpu>
	I0805 11:48:01.698481  402885 main.go:141] libmachine: (ha-672593-m02)   <features>
	I0805 11:48:01.698491  402885 main.go:141] libmachine: (ha-672593-m02)     <acpi/>
	I0805 11:48:01.698500  402885 main.go:141] libmachine: (ha-672593-m02)     <apic/>
	I0805 11:48:01.698511  402885 main.go:141] libmachine: (ha-672593-m02)     <pae/>
	I0805 11:48:01.698520  402885 main.go:141] libmachine: (ha-672593-m02)     
	I0805 11:48:01.698530  402885 main.go:141] libmachine: (ha-672593-m02)   </features>
	I0805 11:48:01.698536  402885 main.go:141] libmachine: (ha-672593-m02)   <cpu mode='host-passthrough'>
	I0805 11:48:01.698547  402885 main.go:141] libmachine: (ha-672593-m02)   
	I0805 11:48:01.698554  402885 main.go:141] libmachine: (ha-672593-m02)   </cpu>
	I0805 11:48:01.698563  402885 main.go:141] libmachine: (ha-672593-m02)   <os>
	I0805 11:48:01.698580  402885 main.go:141] libmachine: (ha-672593-m02)     <type>hvm</type>
	I0805 11:48:01.698592  402885 main.go:141] libmachine: (ha-672593-m02)     <boot dev='cdrom'/>
	I0805 11:48:01.698600  402885 main.go:141] libmachine: (ha-672593-m02)     <boot dev='hd'/>
	I0805 11:48:01.698614  402885 main.go:141] libmachine: (ha-672593-m02)     <bootmenu enable='no'/>
	I0805 11:48:01.698622  402885 main.go:141] libmachine: (ha-672593-m02)   </os>
	I0805 11:48:01.698630  402885 main.go:141] libmachine: (ha-672593-m02)   <devices>
	I0805 11:48:01.698641  402885 main.go:141] libmachine: (ha-672593-m02)     <disk type='file' device='cdrom'>
	I0805 11:48:01.698655  402885 main.go:141] libmachine: (ha-672593-m02)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/boot2docker.iso'/>
	I0805 11:48:01.698668  402885 main.go:141] libmachine: (ha-672593-m02)       <target dev='hdc' bus='scsi'/>
	I0805 11:48:01.698677  402885 main.go:141] libmachine: (ha-672593-m02)       <readonly/>
	I0805 11:48:01.698690  402885 main.go:141] libmachine: (ha-672593-m02)     </disk>
	I0805 11:48:01.698703  402885 main.go:141] libmachine: (ha-672593-m02)     <disk type='file' device='disk'>
	I0805 11:48:01.698710  402885 main.go:141] libmachine: (ha-672593-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:48:01.698723  402885 main.go:141] libmachine: (ha-672593-m02)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/ha-672593-m02.rawdisk'/>
	I0805 11:48:01.698735  402885 main.go:141] libmachine: (ha-672593-m02)       <target dev='hda' bus='virtio'/>
	I0805 11:48:01.698743  402885 main.go:141] libmachine: (ha-672593-m02)     </disk>
	I0805 11:48:01.698754  402885 main.go:141] libmachine: (ha-672593-m02)     <interface type='network'>
	I0805 11:48:01.698787  402885 main.go:141] libmachine: (ha-672593-m02)       <source network='mk-ha-672593'/>
	I0805 11:48:01.698810  402885 main.go:141] libmachine: (ha-672593-m02)       <model type='virtio'/>
	I0805 11:48:01.698819  402885 main.go:141] libmachine: (ha-672593-m02)     </interface>
	I0805 11:48:01.698830  402885 main.go:141] libmachine: (ha-672593-m02)     <interface type='network'>
	I0805 11:48:01.698840  402885 main.go:141] libmachine: (ha-672593-m02)       <source network='default'/>
	I0805 11:48:01.698855  402885 main.go:141] libmachine: (ha-672593-m02)       <model type='virtio'/>
	I0805 11:48:01.698867  402885 main.go:141] libmachine: (ha-672593-m02)     </interface>
	I0805 11:48:01.698877  402885 main.go:141] libmachine: (ha-672593-m02)     <serial type='pty'>
	I0805 11:48:01.698892  402885 main.go:141] libmachine: (ha-672593-m02)       <target port='0'/>
	I0805 11:48:01.698905  402885 main.go:141] libmachine: (ha-672593-m02)     </serial>
	I0805 11:48:01.698918  402885 main.go:141] libmachine: (ha-672593-m02)     <console type='pty'>
	I0805 11:48:01.698928  402885 main.go:141] libmachine: (ha-672593-m02)       <target type='serial' port='0'/>
	I0805 11:48:01.698936  402885 main.go:141] libmachine: (ha-672593-m02)     </console>
	I0805 11:48:01.698950  402885 main.go:141] libmachine: (ha-672593-m02)     <rng model='virtio'>
	I0805 11:48:01.698961  402885 main.go:141] libmachine: (ha-672593-m02)       <backend model='random'>/dev/random</backend>
	I0805 11:48:01.698970  402885 main.go:141] libmachine: (ha-672593-m02)     </rng>
	I0805 11:48:01.698978  402885 main.go:141] libmachine: (ha-672593-m02)     
	I0805 11:48:01.698991  402885 main.go:141] libmachine: (ha-672593-m02)     
	I0805 11:48:01.699024  402885 main.go:141] libmachine: (ha-672593-m02)   </devices>
	I0805 11:48:01.699054  402885 main.go:141] libmachine: (ha-672593-m02) </domain>
	I0805 11:48:01.699066  402885 main.go:141] libmachine: (ha-672593-m02) 
	I0805 11:48:01.706052  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:ea:0c:74 in network default
	I0805 11:48:01.706817  402885 main.go:141] libmachine: (ha-672593-m02) Ensuring networks are active...
	I0805 11:48:01.706843  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:01.707678  402885 main.go:141] libmachine: (ha-672593-m02) Ensuring network default is active
	I0805 11:48:01.708124  402885 main.go:141] libmachine: (ha-672593-m02) Ensuring network mk-ha-672593 is active
	I0805 11:48:01.708718  402885 main.go:141] libmachine: (ha-672593-m02) Getting domain xml...
	I0805 11:48:01.709550  402885 main.go:141] libmachine: (ha-672593-m02) Creating domain...
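With the domain XML above defined, libmachine boots the VM and then polls for a DHCP lease on the private network mk-ha-672593, which is what the "unable to find current IP address" retries below are doing. The same state can be inspected out-of-band with virsh, assuming the qemu:///system URI from the cluster config:

    virsh -c qemu:///system dominfo ha-672593-m02
    virsh -c qemu:///system net-dhcp-leases mk-ha-672593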
	I0805 11:48:02.917859  402885 main.go:141] libmachine: (ha-672593-m02) Waiting to get IP...
	I0805 11:48:02.918747  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:02.919199  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:02.919217  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:02.919183  403313 retry.go:31] will retry after 302.863518ms: waiting for machine to come up
	I0805 11:48:03.223803  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:03.224253  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:03.224282  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:03.224201  403313 retry.go:31] will retry after 382.819723ms: waiting for machine to come up
	I0805 11:48:03.608940  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:03.609403  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:03.609428  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:03.609344  403313 retry.go:31] will retry after 318.082741ms: waiting for machine to come up
	I0805 11:48:03.928829  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:03.929244  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:03.929274  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:03.929187  403313 retry.go:31] will retry after 479.149529ms: waiting for machine to come up
	I0805 11:48:04.409675  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:04.410224  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:04.410265  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:04.410173  403313 retry.go:31] will retry after 683.38485ms: waiting for machine to come up
	I0805 11:48:05.095020  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:05.095382  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:05.095411  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:05.095355  403313 retry.go:31] will retry after 944.815364ms: waiting for machine to come up
	I0805 11:48:06.042078  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:06.042559  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:06.042591  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:06.042512  403313 retry.go:31] will retry after 934.806892ms: waiting for machine to come up
	I0805 11:48:06.979021  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:06.979515  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:06.979541  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:06.979475  403313 retry.go:31] will retry after 1.203623715s: waiting for machine to come up
	I0805 11:48:08.184893  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:08.185316  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:08.185346  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:08.185260  403313 retry.go:31] will retry after 1.41925065s: waiting for machine to come up
	I0805 11:48:09.606879  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:09.607341  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:09.607370  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:09.607270  403313 retry.go:31] will retry after 1.671138336s: waiting for machine to come up
	I0805 11:48:11.280997  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:11.281363  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:11.281389  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:11.281332  403313 retry.go:31] will retry after 2.578509384s: waiting for machine to come up
	I0805 11:48:13.862566  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:13.862965  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:13.862990  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:13.862912  403313 retry.go:31] will retry after 2.291998643s: waiting for machine to come up
	I0805 11:48:16.156873  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:16.157200  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:16.157225  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:16.157174  403313 retry.go:31] will retry after 4.45165891s: waiting for machine to come up
	I0805 11:48:20.613052  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:20.613503  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:20.613534  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:20.613441  403313 retry.go:31] will retry after 5.087876314s: waiting for machine to come up
	I0805 11:48:25.704853  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.705361  402885 main.go:141] libmachine: (ha-672593-m02) Found IP for machine: 192.168.39.68
	I0805 11:48:25.705384  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has current primary IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.705393  402885 main.go:141] libmachine: (ha-672593-m02) Reserving static IP address...
	I0805 11:48:25.705715  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find host DHCP lease matching {name: "ha-672593-m02", mac: "52:54:00:67:7b:e8", ip: "192.168.39.68"} in network mk-ha-672593
	I0805 11:48:25.776238  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Getting to WaitForSSH function...
	I0805 11:48:25.776273  402885 main.go:141] libmachine: (ha-672593-m02) Reserved static IP address: 192.168.39.68
	I0805 11:48:25.776296  402885 main.go:141] libmachine: (ha-672593-m02) Waiting for SSH to be available...
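
The retry lines above show libmachine polling libvirt for the new guest's DHCP lease, sleeping a little longer (with some randomness) between attempts until the IP shows up. A minimal Go sketch of that wait-for-IP loop follows; the doubling-with-jitter schedule and the lookupIP helper are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for querying the libvirt DHCP leases for a MAC
	// address; it is an assumption for illustration only.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls until the guest's lease appears, growing the delay with
	// jitter between attempts, roughly like the retry.go lines in the log above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
			time.Sleep(delay + jitter)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("machine with MAC %s did not get an IP within %v", mac, timeout)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:67:7b:e8", 3*time.Second); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
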
	I0805 11:48:25.778763  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.779155  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:25.779186  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.779266  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Using SSH client type: external
	I0805 11:48:25.779298  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa (-rw-------)
	I0805 11:48:25.779330  402885 main.go:141] libmachine: (ha-672593-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:48:25.779346  402885 main.go:141] libmachine: (ha-672593-m02) DBG | About to run SSH command:
	I0805 11:48:25.779385  402885 main.go:141] libmachine: (ha-672593-m02) DBG | exit 0
	I0805 11:48:25.908557  402885 main.go:141] libmachine: (ha-672593-m02) DBG | SSH cmd err, output: <nil>: 
	I0805 11:48:25.908814  402885 main.go:141] libmachine: (ha-672593-m02) KVM machine creation complete!
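
Once the lease is known, the machine only counts as created after a trivial command succeeds over SSH; the log above shows that probe being run as an external ssh process with host-key checking disabled. A small self-contained sketch of the same probe, with placeholder host and key paths:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeSSH runs "exit 0" over ssh with roughly the options shown in the log
	// above; host and keyPath are placeholders, not values read from any config.
	func probeSSH(host, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + host,
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready yet: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		if err := probeSSH("192.168.39.68", "/path/to/id_rsa"); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("SSH is available")
		}
	}
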
	I0805 11:48:25.909183  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetConfigRaw
	I0805 11:48:25.909795  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:25.910028  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:25.910231  402885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:48:25.910244  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:48:25.911548  402885 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:48:25.911561  402885 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:48:25.911567  402885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:48:25.911575  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:25.913870  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.914377  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:25.914403  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.914600  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:25.914792  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:25.914940  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:25.915099  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:25.915267  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:25.915497  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:25.915513  402885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:48:26.023197  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:48:26.023222  402885 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:48:26.023238  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.025829  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.026174  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.026207  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.026292  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.026551  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.026750  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.026921  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.027115  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.027346  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.027364  402885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:48:26.132333  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:48:26.132440  402885 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:48:26.132453  402885 main.go:141] libmachine: Provisioning with buildroot...
	I0805 11:48:26.132464  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:26.132744  402885 buildroot.go:166] provisioning hostname "ha-672593-m02"
	I0805 11:48:26.132763  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:26.132977  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.135523  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.135901  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.135916  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.136114  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.136277  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.136433  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.136567  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.136758  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.136912  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.136924  402885 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593-m02 && echo "ha-672593-m02" | sudo tee /etc/hostname
	I0805 11:48:26.253208  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593-m02
	
	I0805 11:48:26.253238  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.255880  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.256319  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.256359  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.256502  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.256723  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.256875  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.257002  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.257148  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.257336  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.257357  402885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:48:26.372664  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:48:26.372695  402885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:48:26.372714  402885 buildroot.go:174] setting up certificates
	I0805 11:48:26.372728  402885 provision.go:84] configureAuth start
	I0805 11:48:26.372736  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:26.372977  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:26.375201  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.375595  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.375620  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.375730  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.378096  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.378431  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.378451  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.378635  402885 provision.go:143] copyHostCerts
	I0805 11:48:26.378669  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:48:26.378704  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:48:26.378713  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:48:26.378776  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:48:26.378845  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:48:26.378868  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:48:26.378877  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:48:26.378910  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:48:26.378972  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:48:26.378998  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:48:26.379005  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:48:26.379042  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:48:26.379123  402885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593-m02 san=[127.0.0.1 192.168.39.68 ha-672593-m02 localhost minikube]
	I0805 11:48:26.606457  402885 provision.go:177] copyRemoteCerts
	I0805 11:48:26.606519  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:48:26.606547  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.609287  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.609596  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.609631  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.609725  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.609945  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.610151  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.610307  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:26.695566  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:48:26.695655  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 11:48:26.723973  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:48:26.724047  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 11:48:26.747390  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:48:26.747457  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:48:26.772915  402885 provision.go:87] duration metric: took 400.171697ms to configureAuth
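
configureAuth generates a server certificate whose subject alternative names cover every address the new node answers on (127.0.0.1, 192.168.39.68, ha-672593-m02, localhost, minikube) and then copies it to /etc/docker on the guest. The sketch below builds a certificate with that kind of SAN list using Go's crypto/x509; it self-signs for brevity instead of signing with the minikube CA, so it illustrates the idea rather than the tool's actual code path:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// selfSignedServerCert creates a certificate whose SANs cover the IPs and
	// hostnames a new node must be reachable under (self-signed only to keep
	// the example self-contained).
	func selfSignedServerCert(cn string, ips []net.IP, dnsNames []string) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: cn},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dnsNames,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		pemBytes, err := selfSignedServerCert("ha-672593-m02",
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
			[]string{"ha-672593-m02", "localhost", "minikube"})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pemBytes)
	}
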
	I0805 11:48:26.772946  402885 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:48:26.773159  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:26.773262  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.776201  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.776605  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.776636  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.776855  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.777069  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.777246  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.777414  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.777600  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.777848  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.777878  402885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:48:27.039393  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:48:27.039428  402885 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:48:27.039437  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetURL
	I0805 11:48:27.040785  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Using libvirt version 6000000
	I0805 11:48:27.042890  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.043183  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.043222  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.043435  402885 main.go:141] libmachine: Docker is up and running!
	I0805 11:48:27.043450  402885 main.go:141] libmachine: Reticulating splines...
	I0805 11:48:27.043457  402885 client.go:171] duration metric: took 25.696041913s to LocalClient.Create
	I0805 11:48:27.043479  402885 start.go:167] duration metric: took 25.696094275s to libmachine.API.Create "ha-672593"
	I0805 11:48:27.043490  402885 start.go:293] postStartSetup for "ha-672593-m02" (driver="kvm2")
	I0805 11:48:27.043500  402885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:48:27.043515  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.043781  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:48:27.043806  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:27.045836  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.046182  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.046204  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.046356  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.046537  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.046718  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.046852  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:27.130015  402885 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:48:27.134348  402885 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:48:27.134376  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:48:27.134446  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:48:27.134547  402885 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:48:27.134561  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:48:27.134671  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:48:27.144049  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:48:27.167995  402885 start.go:296] duration metric: took 124.489233ms for postStartSetup
	I0805 11:48:27.168050  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetConfigRaw
	I0805 11:48:27.168656  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:27.172273  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.172709  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.172738  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.172996  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:48:27.173250  402885 start.go:128] duration metric: took 25.844480317s to createHost
	I0805 11:48:27.173281  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:27.175663  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.175987  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.176036  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.176239  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.176445  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.176618  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.176743  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.176874  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:27.177040  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:27.177050  402885 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:48:27.288647  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722858507.265904771
	
	I0805 11:48:27.288679  402885 fix.go:216] guest clock: 1722858507.265904771
	I0805 11:48:27.288690  402885 fix.go:229] Guest: 2024-08-05 11:48:27.265904771 +0000 UTC Remote: 2024-08-05 11:48:27.173265737 +0000 UTC m=+85.804470788 (delta=92.639034ms)
	I0805 11:48:27.288718  402885 fix.go:200] guest clock delta is within tolerance: 92.639034ms
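
The fix.go lines above compare the guest clock against the host clock and accept the machine when the drift is small. A short Go reproduction of that arithmetic using the values from this run (delta = 92.639034ms); the 2s tolerance used here is an assumed figure for illustration:

	package main

	import (
		"fmt"
		"time"
	)

	// withinClockTolerance mirrors the guest-clock check logged above: compare
	// the guest's reported time against the host's and accept small drift.
	func withinClockTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1722858507, 265904771) // 2024-08-05 11:48:27.265904771 UTC (from the log)
		host := guest.Add(-92639034 * time.Nanosecond)
		delta, ok := withinClockTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
	}
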
	I0805 11:48:27.288725  402885 start.go:83] releasing machines lock for "ha-672593-m02", held for 25.960099843s
	I0805 11:48:27.288760  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.289045  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:27.291857  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.292196  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.292227  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.294470  402885 out.go:177] * Found network options:
	I0805 11:48:27.295834  402885 out.go:177]   - NO_PROXY=192.168.39.102
	W0805 11:48:27.297178  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:48:27.297210  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.297850  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.298207  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.298305  402885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:48:27.298351  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	W0805 11:48:27.298420  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:48:27.298511  402885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:48:27.298534  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:27.301174  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.301488  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.301519  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.301627  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.301685  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.301878  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.302045  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.302083  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.302106  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.302188  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:27.302345  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.302573  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.303922  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.304102  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:27.533535  402885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:48:27.539340  402885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:48:27.539394  402885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:48:27.556611  402885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:48:27.556635  402885 start.go:495] detecting cgroup driver to use...
	I0805 11:48:27.556702  402885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:48:27.573063  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:48:27.586934  402885 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:48:27.586986  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:48:27.600482  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:48:27.614532  402885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:48:27.741282  402885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:48:27.911805  402885 docker.go:233] disabling docker service ...
	I0805 11:48:27.911876  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:48:27.928908  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:48:27.942263  402885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:48:28.086907  402885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:48:28.207913  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:48:28.221916  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:48:28.244146  402885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:48:28.244214  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.257376  402885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:48:28.257457  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.267972  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.278416  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.288915  402885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:48:28.299660  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.309889  402885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.327242  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.337306  402885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:48:28.346426  402885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:48:28.346478  402885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:48:28.361676  402885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:48:28.371024  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:48:28.489702  402885 ssh_runner.go:195] Run: sudo systemctl restart crio
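
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before cri-o is restarted. Pieced together from those substitutions, the drop-in should end up containing roughly the following (the section headers follow cri-o's documented layout and are an assumption here; the real file carries additional settings):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
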
	I0805 11:48:28.625580  402885 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:48:28.625670  402885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:48:28.630374  402885 start.go:563] Will wait 60s for crictl version
	I0805 11:48:28.630416  402885 ssh_runner.go:195] Run: which crictl
	I0805 11:48:28.634219  402885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:48:28.681308  402885 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:48:28.681401  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:48:28.710423  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:48:28.742765  402885 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:48:28.744074  402885 out.go:177]   - env NO_PROXY=192.168.39.102
	I0805 11:48:28.745370  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:28.748024  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:28.748349  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:28.748366  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:28.748575  402885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:48:28.752872  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:48:28.765707  402885 mustload.go:65] Loading cluster: ha-672593
	I0805 11:48:28.765900  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:28.766170  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:28.766204  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:28.781593  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0805 11:48:28.782040  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:28.782493  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:28.782514  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:28.782819  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:28.783004  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:28.784613  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:28.784888  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:28.784910  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:28.801139  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I0805 11:48:28.801558  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:28.802039  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:28.802057  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:28.802374  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:28.802561  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:48:28.802734  402885 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.68
	I0805 11:48:28.802749  402885 certs.go:194] generating shared ca certs ...
	I0805 11:48:28.802768  402885 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:28.802921  402885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:48:28.802999  402885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:48:28.803014  402885 certs.go:256] generating profile certs ...
	I0805 11:48:28.803128  402885 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:48:28.803164  402885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de
	I0805 11:48:28.803184  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.68 192.168.39.254]
	I0805 11:48:29.166917  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de ...
	I0805 11:48:29.166948  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de: {Name:mk675c593a87f2257d2750f97816b630d94b443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:29.167153  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de ...
	I0805 11:48:29.167172  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de: {Name:mkb191d4a87b24cab83b77c2e4b67c3fe8122f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:29.167270  402885 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:48:29.167442  402885 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:48:29.167623  402885 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:48:29.167644  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:48:29.167668  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:48:29.167687  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:48:29.167705  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:48:29.167722  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:48:29.167736  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:48:29.167772  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:48:29.167804  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:48:29.167870  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:48:29.167911  402885 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:48:29.167924  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:48:29.167957  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:48:29.167992  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:48:29.168034  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:48:29.168095  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:48:29.168127  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.168153  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.168170  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.168214  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:29.171252  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:29.171598  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:29.171627  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:29.171790  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:29.171998  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:29.172131  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:29.172269  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:29.248148  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 11:48:29.253076  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 11:48:29.264853  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 11:48:29.268904  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 11:48:29.279274  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 11:48:29.283596  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 11:48:29.294367  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 11:48:29.298519  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 11:48:29.311381  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 11:48:29.316314  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 11:48:29.326771  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 11:48:29.330755  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0805 11:48:29.341542  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:48:29.367072  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:48:29.391061  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:48:29.414257  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:48:29.440624  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0805 11:48:29.465821  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 11:48:29.489923  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:48:29.513668  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:48:29.536786  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:48:29.560954  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:48:29.585731  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:48:29.612407  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 11:48:29.629067  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 11:48:29.645661  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 11:48:29.662647  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 11:48:29.680905  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 11:48:29.698729  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0805 11:48:29.716375  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 11:48:29.733737  402885 ssh_runner.go:195] Run: openssl version
	I0805 11:48:29.739709  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:48:29.750894  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.755513  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.755593  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.761503  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 11:48:29.772864  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:48:29.784142  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.788775  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.788848  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.794459  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:48:29.805331  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:48:29.815852  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.820248  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.820314  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.826195  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
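
Each of the three certificate blocks above follows the same pattern: copy the PEM into /usr/share/ca-certificates, hash it with openssl x509 -hash, then symlink /etc/ssl/certs/<hash>.0 back to it so OpenSSL-style trust lookups can find the CA. A minimal local sketch of that flow in Go (shelling out to openssl the same way the ssh_runner does remotely; paths and error handling are illustrative, not minikube's actual implementation):

// hashlink.go: compute an OpenSSL subject hash for a CA cert and create
// the /etc/ssl/certs/<hash>.0 symlink if it does not already exist.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pemPath, certsDir string) error {
	// Equivalent of: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")

	// Equivalent of: test -L <link> || ln -fs <pemPath> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
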
	I0805 11:48:29.836683  402885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:48:29.841095  402885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:48:29.841148  402885 kubeadm.go:934] updating node {m02 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0805 11:48:29.841238  402885 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:48:29.841264  402885 kube-vip.go:115] generating kube-vip config ...
	I0805 11:48:29.841294  402885 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:48:29.858412  402885 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:48:29.858491  402885 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
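
The kube-vip manifest above is rendered from a handful of per-cluster parameters: the HA VIP 192.168.39.254, the interface eth0, the API server port 8443, and the image tag. A rough text/template sketch of how such a static-pod manifest can be parameterised; this is an illustration only, not the template minikube's kube-vip.go actually uses:

// kubevip_manifest.go: render a trimmed kube-vip static-pod manifest from
// per-cluster parameters. Illustrative only; not minikube's real template.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

type params struct {
	Image     string
	Interface string
	Port      int
	VIP       string
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	if err := tmpl.Execute(os.Stdout, params{
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		Interface: "eth0",
		Port:      8443,
		VIP:       "192.168.39.254",
	}); err != nil {
		panic(err)
	}
}
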
	I0805 11:48:29.858560  402885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:48:29.868915  402885 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 11:48:29.868978  402885 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 11:48:29.878710  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 11:48:29.878750  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:48:29.878778  402885 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0805 11:48:29.878810  402885 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0805 11:48:29.878835  402885 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:48:29.883247  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 11:48:29.883269  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 11:48:30.745724  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:48:30.745806  402885 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:48:30.750912  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 11:48:30.750943  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 11:48:31.103575  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:48:31.118930  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:48:31.119043  402885 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:48:31.123655  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 11:48:31.123696  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
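
The kubectl, kubeadm, and kubelet binaries are fetched from dl.k8s.io with a checksum=file:... query, i.e. each download is verified against the published .sha256 file before being scp'd into /var/lib/minikube/binaries. A stripped-down sketch of that verification step, assuming the .sha256 file holds the bare hex digest (the dl.k8s.io convention); file names are illustrative:

// verify_sha256.go: check a downloaded Kubernetes binary against its
// published .sha256 digest, roughly what the checksum=file:... download
// URLs above request. Paths are illustrative.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func verify(binPath, shaPath string) error {
	want, err := os.ReadFile(shaPath)
	if err != nil {
		return err
	}
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	// The dl.k8s.io .sha256 files contain only the hex digest.
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", binPath, got)
	}
	return nil
}

func main() {
	if err := verify("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}
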
	I0805 11:48:31.536979  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 11:48:31.546582  402885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0805 11:48:31.562857  402885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:48:31.579410  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 11:48:31.595773  402885 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:48:31.599495  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
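
The one-liner above rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal line, append the VIP mapping, and copy the result back over /etc/hosts. A small Go sketch with the same effect (the path and entry are taken from this log; writing /etc/hosts requires root):

// ensure_hosts.go: idempotently pin control-plane.minikube.internal in
// /etc/hosts, matching the grep/echo/cp one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale line for the same hostname.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
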
	I0805 11:48:31.613985  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:48:31.744740  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:48:31.762194  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:31.762710  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:31.762778  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:31.778060  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I0805 11:48:31.778484  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:31.778959  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:31.778978  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:31.779317  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:31.779507  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:48:31.779669  402885 start.go:317] joinCluster: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:48:31.779779  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 11:48:31.779802  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:31.782506  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:31.782912  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:31.782949  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:31.783252  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:31.783430  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:31.783580  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:31.783703  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:31.940397  402885 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:31.940449  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7hrapk.99ds5t9ultc1uhu4 --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0805 11:48:55.526418  402885 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7hrapk.99ds5t9ultc1uhu4 --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (23.585944246s)
	I0805 11:48:55.526449  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 11:48:56.068657  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-672593-m02 minikube.k8s.io/updated_at=2024_08_05T11_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=ha-672593 minikube.k8s.io/primary=false
	I0805 11:48:56.174867  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-672593-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 11:48:56.316883  402885 start.go:319] duration metric: took 24.537207722s to joinCluster
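
The join itself is two remote shell commands: kubeadm token create --print-join-command on the existing control plane, then the printed kubeadm join on m02, timed by the ssh_runner (23.6s here, 24.5s for the whole joinCluster step). A small local sketch of running such a long command under a deadline and reporting its duration; it assumes kubeadm is on the PATH of the host it runs on and is not minikube's ssh_runner:

// run_with_deadline.go: run a long kubeadm-style command with a timeout
// and log how long it took, similar in spirit to the ssh_runner
// "Completed: ..." lines above. The command line is illustrative.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	start := time.Now()
	cmd := exec.CommandContext(ctx, "kubeadm", "token", "create", "--print-join-command", "--ttl=0")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "command failed after %s: %v\n%s", time.Since(start), err, out)
		os.Exit(1)
	}
	fmt.Printf("Completed in %s\n%s", time.Since(start), out)
}
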
	I0805 11:48:56.316980  402885 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:56.317322  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:56.318612  402885 out.go:177] * Verifying Kubernetes components...
	I0805 11:48:56.319939  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:48:56.558841  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:48:56.578557  402885 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:48:56.578917  402885 kapi.go:59] client config for ha-672593: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key", CAFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 11:48:56.579008  402885 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0805 11:48:56.579356  402885 node_ready.go:35] waiting up to 6m0s for node "ha-672593-m02" to be "Ready" ...
	I0805 11:48:56.579481  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:56.579494  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:56.579505  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:56.579511  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:56.599700  402885 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0805 11:48:57.080108  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:57.080133  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:57.080145  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:57.080150  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:57.084537  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:48:57.580565  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:57.580594  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:57.580605  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:57.580610  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:57.585753  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:48:58.079648  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:58.079677  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:58.079688  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:58.079695  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:58.083254  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:48:58.580559  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:58.580585  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:58.580598  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:58.580603  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:58.584453  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:48:58.585216  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:48:59.079663  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:59.079688  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:59.079700  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:59.079705  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:59.083765  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:48:59.580423  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:59.580448  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:59.580456  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:59.580462  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:59.584400  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:00.080594  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:00.080620  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:00.080631  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:00.080638  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:00.087087  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:49:00.580570  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:00.580597  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:00.580609  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:00.580616  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:00.583885  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:01.080445  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:01.080476  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:01.080488  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:01.080495  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:01.083958  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:01.084750  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:01.580286  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:01.580311  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:01.580322  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:01.580329  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:01.583567  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:02.080313  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:02.080337  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:02.080345  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:02.080350  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:02.084323  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:02.580551  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:02.580574  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:02.580583  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:02.580587  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:02.584267  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:03.080170  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:03.080193  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:03.080201  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:03.080205  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:03.083026  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:03.579691  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:03.579718  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:03.579730  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:03.579735  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:03.583236  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:03.583923  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:04.080087  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:04.080122  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:04.080130  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:04.080134  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:04.083800  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:04.580499  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:04.580533  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:04.580544  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:04.580551  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:04.584076  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:05.079989  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:05.080035  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:05.080046  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:05.080050  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:05.085032  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:05.579843  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:05.579873  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:05.579884  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:05.579890  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:05.583592  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:05.584365  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:06.079736  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:06.079782  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:06.079800  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:06.079805  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:06.083142  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:06.580138  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:06.580166  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:06.580175  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:06.580180  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:06.584140  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:07.079630  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:07.079659  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:07.079670  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:07.079678  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:07.083088  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:07.580280  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:07.580305  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:07.580313  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:07.580317  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:07.583537  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:08.079621  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:08.079646  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:08.079655  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:08.079658  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:08.082922  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:08.083484  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:08.579882  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:08.579907  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:08.579916  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:08.579920  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:08.583265  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:09.080350  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:09.080374  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:09.080387  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:09.080392  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:09.083737  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:09.579791  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:09.579814  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:09.579822  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:09.579826  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:09.583634  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:10.079631  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:10.079654  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:10.079662  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:10.079665  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:10.082948  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:10.083628  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:10.580238  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:10.580263  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:10.580307  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:10.580314  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:10.589238  402885 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 11:49:11.079870  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:11.079900  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:11.079911  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:11.079915  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:11.084116  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:11.580430  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:11.580455  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:11.580464  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:11.580469  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:11.583695  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:12.080479  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:12.080501  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:12.080509  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:12.080513  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:12.083693  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:12.084378  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:12.579782  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:12.579809  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:12.579821  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:12.579827  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:12.583199  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:13.080193  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:13.080217  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:13.080225  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:13.080228  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:13.083579  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:13.580615  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:13.580640  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:13.580646  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:13.580650  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:13.585161  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:14.080187  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.080214  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.080225  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.080231  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.084544  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:14.085474  402885 node_ready.go:49] node "ha-672593-m02" has status "Ready":"True"
	I0805 11:49:14.085495  402885 node_ready.go:38] duration metric: took 17.506115032s for node "ha-672593-m02" to be "Ready" ...
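
The half-second GET loop above is the node_ready wait: minikube polls /api/v1/nodes/ha-672593-m02 until the node's Ready condition flips to True, about 17.5s after the join here. A compact client-go equivalent; the kubeconfig path and node name are taken from this log, while the polling interval and timeout are illustrative:

// wait_node_ready.go: poll a node until its Ready condition is True,
// roughly what the node_ready loop above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19377-383955/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-672593-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
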
	I0805 11:49:14.085506  402885 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:49:14.085635  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:14.085646  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.085653  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.085659  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.091201  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:49:14.097313  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.097408  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sfh7c
	I0805 11:49:14.097416  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.097424  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.097430  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.100382  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.100960  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.100975  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.100984  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.100989  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.103390  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.104025  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.104048  402885 pod_ready.go:81] duration metric: took 6.708322ms for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.104059  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.104107  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sgd4v
	I0805 11:49:14.104116  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.104122  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.104126  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.106567  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.107261  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.107278  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.107289  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.107296  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.109533  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.110141  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.110164  402885 pod_ready.go:81] duration metric: took 6.09529ms for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.110175  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.110229  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593
	I0805 11:49:14.110237  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.110243  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.110246  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.113626  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.114280  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.114294  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.114301  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.114305  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.117412  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.118170  402885 pod_ready.go:92] pod "etcd-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.118188  402885 pod_ready.go:81] duration metric: took 8.002529ms for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.118196  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.118238  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m02
	I0805 11:49:14.118245  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.118251  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.118257  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.120418  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.121019  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.121031  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.121038  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.121043  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.123844  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.124626  402885 pod_ready.go:92] pod "etcd-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.124648  402885 pod_ready.go:81] duration metric: took 6.444632ms for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.124666  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.281129  402885 request.go:629] Waited for 156.38375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:49:14.281215  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:49:14.281226  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.281254  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.281262  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.284965  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.480911  402885 request.go:629] Waited for 195.176702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.481004  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.481010  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.481018  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.481025  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.484641  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.485138  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.485156  402885 pod_ready.go:81] duration metric: took 360.478367ms for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.485168  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.680246  402885 request.go:629] Waited for 194.979653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:49:14.680317  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:49:14.680325  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.680337  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.680347  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.683982  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.881084  402885 request.go:629] Waited for 196.407276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.881149  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.881154  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.881162  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.881166  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.884441  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.885131  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.885158  402885 pod_ready.go:81] duration metric: took 399.981518ms for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.885172  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.081186  402885 request.go:629] Waited for 195.91074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:49:15.081267  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:49:15.081277  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.081292  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.081302  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.084342  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.280285  402885 request.go:629] Waited for 195.278302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:15.280404  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:15.280415  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.280426  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.280433  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.283844  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.284509  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:15.284528  402885 pod_ready.go:81] duration metric: took 399.349189ms for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.284538  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.480679  402885 request.go:629] Waited for 196.067694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:49:15.480766  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:49:15.480774  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.480785  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.480795  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.484099  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.681215  402885 request.go:629] Waited for 196.399946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:15.681312  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:15.681323  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.681336  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.681348  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.684647  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.685195  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:15.685220  402885 pod_ready.go:81] duration metric: took 400.675947ms for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.685229  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.880237  402885 request.go:629] Waited for 194.922894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:49:15.880318  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:49:15.880325  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.880332  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.880336  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.883927  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.080288  402885 request.go:629] Waited for 195.361808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:16.080355  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:16.080361  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.080369  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.080374  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.083757  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.084301  402885 pod_ready.go:92] pod "kube-proxy-mdwh2" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:16.084321  402885 pod_ready.go:81] duration metric: took 399.08567ms for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.084333  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.280547  402885 request.go:629] Waited for 196.116287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:49:16.280639  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:49:16.280647  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.280663  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.280671  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.284090  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.480283  402885 request.go:629] Waited for 195.575461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.480354  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.480359  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.480367  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.480371  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.483901  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.484533  402885 pod_ready.go:92] pod "kube-proxy-wtsdt" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:16.484555  402885 pod_ready.go:81] duration metric: took 400.214339ms for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.484567  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.680924  402885 request.go:629] Waited for 196.260193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:49:16.680994  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:49:16.680999  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.681007  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.681016  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.684490  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.880561  402885 request.go:629] Waited for 195.448909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.880624  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.880628  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.880637  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.880648  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.884661  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.885298  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:16.885325  402885 pod_ready.go:81] duration metric: took 400.748413ms for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.885341  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:17.080234  402885 request.go:629] Waited for 194.799084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:49:17.080333  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:49:17.080351  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.080364  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.080375  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.083923  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:17.280992  402885 request.go:629] Waited for 196.405526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:17.281067  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:17.281076  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.281085  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.281096  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.284891  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:17.285522  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:17.285547  402885 pod_ready.go:81] duration metric: took 400.19791ms for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:17.285561  402885 pod_ready.go:38] duration metric: took 3.200021393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
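	(The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that kind of check — a hypothetical helper, not minikube's own pod_ready.go:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(400 * time.Millisecond) // poll interval chosen for the sketch
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-wtsdt", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}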
	I0805 11:49:17.285580  402885 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:49:17.285655  402885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:49:17.302148  402885 api_server.go:72] duration metric: took 20.985124928s to wait for apiserver process to appear ...
	I0805 11:49:17.302174  402885 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:49:17.302199  402885 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0805 11:49:17.306850  402885 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0805 11:49:17.306917  402885 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0805 11:49:17.306925  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.306933  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.306936  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.307735  402885 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 11:49:17.307851  402885 api_server.go:141] control plane version: v1.30.3
	I0805 11:49:17.307868  402885 api_server.go:131] duration metric: took 5.687191ms to wait for apiserver health ...
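	(The healthz step is a plain HTTPS GET against the apiserver, followed by a GET of /version. A rough sketch; TLS verification is skipped here purely to keep the example short, whereas the real check trusts the cluster CA:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustration only: do not skip TLS verification outside throwaway test code.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.102:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %q\n", resp.StatusCode, body) // expect 200 and "ok"
	}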
	I0805 11:49:17.307876  402885 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:49:17.480226  402885 request.go:629] Waited for 172.274918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.480300  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.480306  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.480313  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.480317  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.486540  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:49:17.490941  402885 system_pods.go:59] 17 kube-system pods found
	I0805 11:49:17.490973  402885 system_pods.go:61] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:49:17.490978  402885 system_pods.go:61] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:49:17.490982  402885 system_pods.go:61] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:49:17.490985  402885 system_pods.go:61] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:49:17.490988  402885 system_pods.go:61] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:49:17.490991  402885 system_pods.go:61] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:49:17.490995  402885 system_pods.go:61] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:49:17.490998  402885 system_pods.go:61] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:49:17.491001  402885 system_pods.go:61] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:49:17.491004  402885 system_pods.go:61] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:49:17.491007  402885 system_pods.go:61] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:49:17.491012  402885 system_pods.go:61] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:49:17.491019  402885 system_pods.go:61] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:49:17.491022  402885 system_pods.go:61] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:49:17.491025  402885 system_pods.go:61] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:49:17.491028  402885 system_pods.go:61] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:49:17.491031  402885 system_pods.go:61] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:49:17.491045  402885 system_pods.go:74] duration metric: took 183.154454ms to wait for pod list to return data ...
	I0805 11:49:17.491062  402885 default_sa.go:34] waiting for default service account to be created ...
	I0805 11:49:17.680408  402885 request.go:629] Waited for 189.264104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:49:17.680470  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:49:17.680475  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.680483  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.680488  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.684004  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:17.684266  402885 default_sa.go:45] found service account: "default"
	I0805 11:49:17.684287  402885 default_sa.go:55] duration metric: took 193.216718ms for default service account to be created ...
	I0805 11:49:17.684298  402885 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 11:49:17.880805  402885 request.go:629] Waited for 196.392194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.880870  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.880898  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.880910  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.880915  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.886649  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:49:17.892241  402885 system_pods.go:86] 17 kube-system pods found
	I0805 11:49:17.892268  402885 system_pods.go:89] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:49:17.892274  402885 system_pods.go:89] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:49:17.892278  402885 system_pods.go:89] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:49:17.892283  402885 system_pods.go:89] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:49:17.892287  402885 system_pods.go:89] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:49:17.892290  402885 system_pods.go:89] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:49:17.892295  402885 system_pods.go:89] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:49:17.892299  402885 system_pods.go:89] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:49:17.892303  402885 system_pods.go:89] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:49:17.892307  402885 system_pods.go:89] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:49:17.892312  402885 system_pods.go:89] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:49:17.892317  402885 system_pods.go:89] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:49:17.892321  402885 system_pods.go:89] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:49:17.892325  402885 system_pods.go:89] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:49:17.892328  402885 system_pods.go:89] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:49:17.892332  402885 system_pods.go:89] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:49:17.892336  402885 system_pods.go:89] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:49:17.892343  402885 system_pods.go:126] duration metric: took 208.038563ms to wait for k8s-apps to be running ...
	I0805 11:49:17.892357  402885 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 11:49:17.892407  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:49:17.908299  402885 system_svc.go:56] duration metric: took 15.936288ms WaitForService to wait for kubelet
	I0805 11:49:17.908332  402885 kubeadm.go:582] duration metric: took 21.591309871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:49:17.908358  402885 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:49:18.080827  402885 request.go:629] Waited for 172.374358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0805 11:49:18.080907  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0805 11:49:18.080914  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:18.080921  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:18.080927  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:18.084595  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:18.085599  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:49:18.085631  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:49:18.085646  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:49:18.085652  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:49:18.085658  402885 node_conditions.go:105] duration metric: took 177.294354ms to run NodePressure ...
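	(The NodePressure step reads each node's reported capacity off the Node objects; the two 17734596Ki / 2-CPU pairs above are the two current nodes. A hedged client-go sketch of the same read:)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
	}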
	I0805 11:49:18.085674  402885 start.go:241] waiting for startup goroutines ...
	I0805 11:49:18.085706  402885 start.go:255] writing updated cluster config ...
	I0805 11:49:18.087856  402885 out.go:177] 
	I0805 11:49:18.089404  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:49:18.089497  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:49:18.091027  402885 out.go:177] * Starting "ha-672593-m03" control-plane node in "ha-672593" cluster
	I0805 11:49:18.092227  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:49:18.092250  402885 cache.go:56] Caching tarball of preloaded images
	I0805 11:49:18.092364  402885 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:49:18.092381  402885 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:49:18.092499  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:49:18.092715  402885 start.go:360] acquireMachinesLock for ha-672593-m03: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:49:18.092771  402885 start.go:364] duration metric: took 30.723µs to acquireMachinesLock for "ha-672593-m03"
	I0805 11:49:18.092793  402885 start.go:93] Provisioning new machine with config: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:49:18.092931  402885 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0805 11:49:18.094466  402885 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 11:49:18.094601  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:18.094640  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:18.110518  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0805 11:49:18.110993  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:18.111496  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:18.111518  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:18.111888  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:18.112100  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:18.112278  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:18.112468  402885 start.go:159] libmachine.API.Create for "ha-672593" (driver="kvm2")
	I0805 11:49:18.112503  402885 client.go:168] LocalClient.Create starting
	I0805 11:49:18.112548  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:49:18.112595  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:49:18.112618  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:49:18.112691  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:49:18.112729  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:49:18.112747  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:49:18.112773  402885 main.go:141] libmachine: Running pre-create checks...
	I0805 11:49:18.112786  402885 main.go:141] libmachine: (ha-672593-m03) Calling .PreCreateCheck
	I0805 11:49:18.112944  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetConfigRaw
	I0805 11:49:18.113366  402885 main.go:141] libmachine: Creating machine...
	I0805 11:49:18.113383  402885 main.go:141] libmachine: (ha-672593-m03) Calling .Create
	I0805 11:49:18.113521  402885 main.go:141] libmachine: (ha-672593-m03) Creating KVM machine...
	I0805 11:49:18.114665  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found existing default KVM network
	I0805 11:49:18.114683  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found existing private KVM network mk-ha-672593
	I0805 11:49:18.114826  402885 main.go:141] libmachine: (ha-672593-m03) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03 ...
	I0805 11:49:18.114853  402885 main.go:141] libmachine: (ha-672593-m03) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:49:18.114899  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.114816  403750 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:49:18.114988  402885 main.go:141] libmachine: (ha-672593-m03) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:49:18.417438  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.417283  403750 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa...
	I0805 11:49:18.618583  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.618449  403750 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/ha-672593-m03.rawdisk...
	I0805 11:49:18.618613  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Writing magic tar header
	I0805 11:49:18.618624  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Writing SSH key tar header
	I0805 11:49:18.618632  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.618557  403750 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03 ...
	I0805 11:49:18.618658  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03
	I0805 11:49:18.618727  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03 (perms=drwx------)
	I0805 11:49:18.618759  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:49:18.618772  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:49:18.618792  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:49:18.618807  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:49:18.618823  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:49:18.618837  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:49:18.618851  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:49:18.618868  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:49:18.618878  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:49:18.618887  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:49:18.618892  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home
	I0805 11:49:18.618901  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Skipping /home - not owner
	I0805 11:49:18.618911  402885 main.go:141] libmachine: (ha-672593-m03) Creating domain...
	I0805 11:49:18.619646  402885 main.go:141] libmachine: (ha-672593-m03) define libvirt domain using xml: 
	I0805 11:49:18.619668  402885 main.go:141] libmachine: (ha-672593-m03) <domain type='kvm'>
	I0805 11:49:18.619677  402885 main.go:141] libmachine: (ha-672593-m03)   <name>ha-672593-m03</name>
	I0805 11:49:18.619690  402885 main.go:141] libmachine: (ha-672593-m03)   <memory unit='MiB'>2200</memory>
	I0805 11:49:18.619714  402885 main.go:141] libmachine: (ha-672593-m03)   <vcpu>2</vcpu>
	I0805 11:49:18.619731  402885 main.go:141] libmachine: (ha-672593-m03)   <features>
	I0805 11:49:18.619759  402885 main.go:141] libmachine: (ha-672593-m03)     <acpi/>
	I0805 11:49:18.619772  402885 main.go:141] libmachine: (ha-672593-m03)     <apic/>
	I0805 11:49:18.619788  402885 main.go:141] libmachine: (ha-672593-m03)     <pae/>
	I0805 11:49:18.619834  402885 main.go:141] libmachine: (ha-672593-m03)     
	I0805 11:49:18.619862  402885 main.go:141] libmachine: (ha-672593-m03)   </features>
	I0805 11:49:18.619875  402885 main.go:141] libmachine: (ha-672593-m03)   <cpu mode='host-passthrough'>
	I0805 11:49:18.619889  402885 main.go:141] libmachine: (ha-672593-m03)   
	I0805 11:49:18.619897  402885 main.go:141] libmachine: (ha-672593-m03)   </cpu>
	I0805 11:49:18.619903  402885 main.go:141] libmachine: (ha-672593-m03)   <os>
	I0805 11:49:18.619911  402885 main.go:141] libmachine: (ha-672593-m03)     <type>hvm</type>
	I0805 11:49:18.619917  402885 main.go:141] libmachine: (ha-672593-m03)     <boot dev='cdrom'/>
	I0805 11:49:18.619925  402885 main.go:141] libmachine: (ha-672593-m03)     <boot dev='hd'/>
	I0805 11:49:18.619932  402885 main.go:141] libmachine: (ha-672593-m03)     <bootmenu enable='no'/>
	I0805 11:49:18.619947  402885 main.go:141] libmachine: (ha-672593-m03)   </os>
	I0805 11:49:18.619977  402885 main.go:141] libmachine: (ha-672593-m03)   <devices>
	I0805 11:49:18.619999  402885 main.go:141] libmachine: (ha-672593-m03)     <disk type='file' device='cdrom'>
	I0805 11:49:18.620017  402885 main.go:141] libmachine: (ha-672593-m03)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/boot2docker.iso'/>
	I0805 11:49:18.620029  402885 main.go:141] libmachine: (ha-672593-m03)       <target dev='hdc' bus='scsi'/>
	I0805 11:49:18.620042  402885 main.go:141] libmachine: (ha-672593-m03)       <readonly/>
	I0805 11:49:18.620053  402885 main.go:141] libmachine: (ha-672593-m03)     </disk>
	I0805 11:49:18.620065  402885 main.go:141] libmachine: (ha-672593-m03)     <disk type='file' device='disk'>
	I0805 11:49:18.620083  402885 main.go:141] libmachine: (ha-672593-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:49:18.620100  402885 main.go:141] libmachine: (ha-672593-m03)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/ha-672593-m03.rawdisk'/>
	I0805 11:49:18.620110  402885 main.go:141] libmachine: (ha-672593-m03)       <target dev='hda' bus='virtio'/>
	I0805 11:49:18.620119  402885 main.go:141] libmachine: (ha-672593-m03)     </disk>
	I0805 11:49:18.620127  402885 main.go:141] libmachine: (ha-672593-m03)     <interface type='network'>
	I0805 11:49:18.620137  402885 main.go:141] libmachine: (ha-672593-m03)       <source network='mk-ha-672593'/>
	I0805 11:49:18.620147  402885 main.go:141] libmachine: (ha-672593-m03)       <model type='virtio'/>
	I0805 11:49:18.620160  402885 main.go:141] libmachine: (ha-672593-m03)     </interface>
	I0805 11:49:18.620172  402885 main.go:141] libmachine: (ha-672593-m03)     <interface type='network'>
	I0805 11:49:18.620185  402885 main.go:141] libmachine: (ha-672593-m03)       <source network='default'/>
	I0805 11:49:18.620196  402885 main.go:141] libmachine: (ha-672593-m03)       <model type='virtio'/>
	I0805 11:49:18.620205  402885 main.go:141] libmachine: (ha-672593-m03)     </interface>
	I0805 11:49:18.620216  402885 main.go:141] libmachine: (ha-672593-m03)     <serial type='pty'>
	I0805 11:49:18.620228  402885 main.go:141] libmachine: (ha-672593-m03)       <target port='0'/>
	I0805 11:49:18.620235  402885 main.go:141] libmachine: (ha-672593-m03)     </serial>
	I0805 11:49:18.620245  402885 main.go:141] libmachine: (ha-672593-m03)     <console type='pty'>
	I0805 11:49:18.620256  402885 main.go:141] libmachine: (ha-672593-m03)       <target type='serial' port='0'/>
	I0805 11:49:18.620268  402885 main.go:141] libmachine: (ha-672593-m03)     </console>
	I0805 11:49:18.620279  402885 main.go:141] libmachine: (ha-672593-m03)     <rng model='virtio'>
	I0805 11:49:18.620292  402885 main.go:141] libmachine: (ha-672593-m03)       <backend model='random'>/dev/random</backend>
	I0805 11:49:18.620304  402885 main.go:141] libmachine: (ha-672593-m03)     </rng>
	I0805 11:49:18.620318  402885 main.go:141] libmachine: (ha-672593-m03)     
	I0805 11:49:18.620328  402885 main.go:141] libmachine: (ha-672593-m03)     
	I0805 11:49:18.620336  402885 main.go:141] libmachine: (ha-672593-m03)   </devices>
	I0805 11:49:18.620348  402885 main.go:141] libmachine: (ha-672593-m03) </domain>
	I0805 11:49:18.620357  402885 main.go:141] libmachine: (ha-672593-m03) 
	I0805 11:49:18.626581  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:19:78:59 in network default
	I0805 11:49:18.627011  402885 main.go:141] libmachine: (ha-672593-m03) Ensuring networks are active...
	I0805 11:49:18.627056  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:18.627664  402885 main.go:141] libmachine: (ha-672593-m03) Ensuring network default is active
	I0805 11:49:18.627934  402885 main.go:141] libmachine: (ha-672593-m03) Ensuring network mk-ha-672593 is active
	I0805 11:49:18.628245  402885 main.go:141] libmachine: (ha-672593-m03) Getting domain xml...
	I0805 11:49:18.628903  402885 main.go:141] libmachine: (ha-672593-m03) Creating domain...
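	(The domain XML printed above is defined and booted through libvirt — "Getting domain xml...", "Creating domain...". A minimal sketch, assuming the libvirt.org/go/libvirt bindings and a local qemu:///system socket; the kvm2 driver actually drives this through the docker-machine plugin interface:)

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// A domain definition like the <domain type='kvm'> XML logged above.
		xml, err := os.ReadFile("ha-672593-m03.xml")
		if err != nil {
			panic(err)
		}
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the defined domain
			panic(err)
		}
		fmt.Println("domain defined and started")
	}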
	I0805 11:49:19.873424  402885 main.go:141] libmachine: (ha-672593-m03) Waiting to get IP...
	I0805 11:49:19.874277  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:19.874689  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:19.874750  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:19.874682  403750 retry.go:31] will retry after 267.860052ms: waiting for machine to come up
	I0805 11:49:20.144380  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:20.144868  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:20.144894  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:20.144813  403750 retry.go:31] will retry after 245.509323ms: waiting for machine to come up
	I0805 11:49:20.392488  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:20.392960  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:20.392989  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:20.392900  403750 retry.go:31] will retry after 374.508573ms: waiting for machine to come up
	I0805 11:49:20.769320  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:20.769855  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:20.769893  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:20.769790  403750 retry.go:31] will retry after 522.60364ms: waiting for machine to come up
	I0805 11:49:21.293910  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:21.294339  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:21.294363  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:21.294294  403750 retry.go:31] will retry after 472.93212ms: waiting for machine to come up
	I0805 11:49:21.768948  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:21.769410  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:21.769441  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:21.769360  403750 retry.go:31] will retry after 609.870077ms: waiting for machine to come up
	I0805 11:49:22.381431  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:22.381891  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:22.381920  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:22.381848  403750 retry.go:31] will retry after 879.361844ms: waiting for machine to come up
	I0805 11:49:23.263122  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:23.263646  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:23.263677  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:23.263608  403750 retry.go:31] will retry after 904.198074ms: waiting for machine to come up
	I0805 11:49:24.169201  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:24.169569  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:24.169593  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:24.169530  403750 retry.go:31] will retry after 1.542079417s: waiting for machine to come up
	I0805 11:49:25.714182  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:25.714581  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:25.714613  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:25.714541  403750 retry.go:31] will retry after 1.650814306s: waiting for machine to come up
	I0805 11:49:27.367413  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:27.367877  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:27.367914  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:27.367814  403750 retry.go:31] will retry after 2.4227249s: waiting for machine to come up
	I0805 11:49:29.792991  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:29.793659  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:29.793681  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:29.793600  403750 retry.go:31] will retry after 2.260664163s: waiting for machine to come up
	I0805 11:49:32.056713  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:32.057175  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:32.057202  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:32.057128  403750 retry.go:31] will retry after 3.608199099s: waiting for machine to come up
	I0805 11:49:35.668118  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:35.668530  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:35.668565  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:35.668472  403750 retry.go:31] will retry after 4.306357465s: waiting for machine to come up
	I0805 11:49:39.977661  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:39.978135  402885 main.go:141] libmachine: (ha-672593-m03) Found IP for machine: 192.168.39.210
	I0805 11:49:39.978160  402885 main.go:141] libmachine: (ha-672593-m03) Reserving static IP address...
	I0805 11:49:39.978173  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has current primary IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:39.978560  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find host DHCP lease matching {name: "ha-672593-m03", mac: "52:54:00:3d:2e:1f", ip: "192.168.39.210"} in network mk-ha-672593
	I0805 11:49:40.049737  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Getting to WaitForSSH function...
	I0805 11:49:40.049773  402885 main.go:141] libmachine: (ha-672593-m03) Reserved static IP address: 192.168.39.210
	I0805 11:49:40.049788  402885 main.go:141] libmachine: (ha-672593-m03) Waiting for SSH to be available...
	I0805 11:49:40.052546  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.052947  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.052979  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.053116  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Using SSH client type: external
	I0805 11:49:40.053145  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa (-rw-------)
	I0805 11:49:40.053196  402885 main.go:141] libmachine: (ha-672593-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:49:40.053220  402885 main.go:141] libmachine: (ha-672593-m03) DBG | About to run SSH command:
	I0805 11:49:40.053238  402885 main.go:141] libmachine: (ha-672593-m03) DBG | exit 0
	I0805 11:49:40.175631  402885 main.go:141] libmachine: (ha-672593-m03) DBG | SSH cmd err, output: <nil>: 
	I0805 11:49:40.175924  402885 main.go:141] libmachine: (ha-672593-m03) KVM machine creation complete!
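	(The repeated "will retry after ..." lines while waiting for an IP come from a jittered, growing backoff loop around the DHCP lease lookup. A generic sketch of that pattern — not minikube's own retry package:)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn with a jittered, doubling delay until it
	// succeeds or the overall deadline passes.
	func retryWithBackoff(deadline time.Duration, fn func() error) error {
		start := time.Now()
		delay := 250 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %v: %w", deadline, err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(10*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		})
		fmt.Println("machine came up after", attempts, "attempts")
	}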
	I0805 11:49:40.176257  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetConfigRaw
	I0805 11:49:40.176807  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:40.176987  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:40.177152  402885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:49:40.177165  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:49:40.178340  402885 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:49:40.178354  402885 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:49:40.178365  402885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:49:40.178370  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.180369  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.180743  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.180777  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.180920  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.181087  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.181238  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.181368  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.181525  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.181796  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.181811  402885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:49:40.282845  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:49:40.282874  402885 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:49:40.282885  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.285502  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.285964  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.285989  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.286179  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.286403  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.286646  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.286792  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.286948  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.287171  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.287185  402885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:49:40.388799  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:49:40.388895  402885 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:49:40.388910  402885 main.go:141] libmachine: Provisioning with buildroot...
	I0805 11:49:40.388926  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:40.389198  402885 buildroot.go:166] provisioning hostname "ha-672593-m03"
	I0805 11:49:40.389226  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:40.389431  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.391957  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.392397  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.392423  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.392547  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.392704  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.392865  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.393039  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.393243  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.393412  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.393426  402885 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593-m03 && echo "ha-672593-m03" | sudo tee /etc/hostname
	I0805 11:49:40.510646  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593-m03
	
	I0805 11:49:40.510680  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.513341  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.513623  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.513661  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.513905  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.514109  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.514274  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.514454  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.514639  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.514835  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.514852  402885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:49:40.626042  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
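	(The hostname and /etc/hosts edits above are run over SSH as the docker user with the machine's generated key. A hedged sketch of issuing such a command with golang.org/x/crypto/ssh — minikube's ssh_runner differs in detail, and the key path below is a placeholder:)

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-672593-m03/id_rsa") // placeholder path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", "192.168.39.210:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput(`sudo hostname ha-672593-m03 && echo "ha-672593-m03" | sudo tee /etc/hostname`)
		fmt.Printf("output: %s err: %v\n", out, err)
	}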
	I0805 11:49:40.626072  402885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:49:40.626094  402885 buildroot.go:174] setting up certificates
	I0805 11:49:40.626103  402885 provision.go:84] configureAuth start
	I0805 11:49:40.626114  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:40.626432  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:40.629094  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.629495  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.629523  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.629677  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.631827  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.632138  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.632168  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.632284  402885 provision.go:143] copyHostCerts
	I0805 11:49:40.632321  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:49:40.632388  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:49:40.632409  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:49:40.632486  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:49:40.632570  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:49:40.632596  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:49:40.632604  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:49:40.632630  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:49:40.632675  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:49:40.632695  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:49:40.632701  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:49:40.632721  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:49:40.632769  402885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593-m03 san=[127.0.0.1 192.168.39.210 ha-672593-m03 localhost minikube]
	I0805 11:49:40.789050  402885 provision.go:177] copyRemoteCerts
	I0805 11:49:40.789114  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:49:40.789142  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.791859  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.792190  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.792216  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.792445  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.792669  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.792858  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.793040  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:40.876523  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:49:40.876619  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:49:40.900431  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:49:40.900512  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 11:49:40.923930  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:49:40.924001  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 11:49:40.948068  402885 provision.go:87] duration metric: took 321.949684ms to configureAuth
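
	[editor's note] The provisioning step above (provision.go:117) issues a server certificate signed by the minikube CA with SANs 127.0.0.1, 192.168.39.210, ha-672593-m03, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the new node. For reference only, a minimal Go sketch of producing a certificate with those SANs using the standard library follows; the throwaway in-memory CA, the key size and the validity periods are assumptions for illustration, not minikube's actual implementation.

	// sancert.go: sketch of a CA-signed server certificate carrying the SANs
	// seen in the log above. Error handling is elided for brevity.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; in the log the CA comes from certs/ca.pem and certs/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the IP and DNS SANs from provision.go:117.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-672593-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.210")},
			DNSNames:     []string{"ha-672593-m03", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// The log then scps the resulting server.pem to /etc/docker/server.pem.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
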
	I0805 11:49:40.948097  402885 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:49:40.948344  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:49:40.948463  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.951011  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.951445  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.951477  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.951644  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.951886  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.952061  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.952187  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.952338  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.952510  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.952524  402885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:49:41.209174  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:49:41.209206  402885 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:49:41.209215  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetURL
	I0805 11:49:41.210659  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Using libvirt version 6000000
	I0805 11:49:41.213052  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.213509  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.213539  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.213704  402885 main.go:141] libmachine: Docker is up and running!
	I0805 11:49:41.213720  402885 main.go:141] libmachine: Reticulating splines...
	I0805 11:49:41.213728  402885 client.go:171] duration metric: took 23.101213828s to LocalClient.Create
	I0805 11:49:41.213756  402885 start.go:167] duration metric: took 23.101289851s to libmachine.API.Create "ha-672593"
	I0805 11:49:41.213769  402885 start.go:293] postStartSetup for "ha-672593-m03" (driver="kvm2")
	I0805 11:49:41.213786  402885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:49:41.213810  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.214069  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:49:41.214089  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:41.216132  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.216484  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.216515  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.216666  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.216855  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.217016  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.217125  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:41.299248  402885 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:49:41.303540  402885 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:49:41.303568  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:49:41.303653  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:49:41.303770  402885 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:49:41.303788  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:49:41.303904  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:49:41.313868  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:49:41.341779  402885 start.go:296] duration metric: took 127.992765ms for postStartSetup
	I0805 11:49:41.341833  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetConfigRaw
	I0805 11:49:41.342533  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:41.345689  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.346158  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.346190  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.346491  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:49:41.346691  402885 start.go:128] duration metric: took 23.253744147s to createHost
	I0805 11:49:41.346721  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:41.349004  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.349345  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.349381  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.349519  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.349713  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.349876  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.349994  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.350202  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:41.350410  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:41.350424  402885 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:49:41.452458  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722858581.427209706
	
	I0805 11:49:41.452487  402885 fix.go:216] guest clock: 1722858581.427209706
	I0805 11:49:41.452495  402885 fix.go:229] Guest: 2024-08-05 11:49:41.427209706 +0000 UTC Remote: 2024-08-05 11:49:41.34670633 +0000 UTC m=+159.977911377 (delta=80.503376ms)
	I0805 11:49:41.452514  402885 fix.go:200] guest clock delta is within tolerance: 80.503376ms
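
	[editor's note] fix.go reads the guest clock over SSH (the date +%s.%N command whose format verbs the logger mangles above) and accepts the machine because the guest/host delta of 80.503376ms is within tolerance. A minimal sketch of that comparison follows; the 2-second tolerance is an assumed value for illustration, not the threshold minikube actually applies.

	// clockdelta.go: sketch of the guest/host clock comparison from fix.go.
	package main

	import (
		"fmt"
		"time"
	)

	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Date(2024, 8, 5, 11, 49, 41, 346706330, time.UTC)  // "Remote" in the log
		guest := time.Date(2024, 8, 5, 11, 49, 41, 427209706, time.UTC) // "Guest" in the log
		fmt.Println(guest.Sub(host), withinTolerance(guest, host, 2*time.Second)) // 80.503376ms true
	}
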
	I0805 11:49:41.452522  402885 start.go:83] releasing machines lock for "ha-672593-m03", held for 23.359741777s
	I0805 11:49:41.452547  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.452802  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:41.455471  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.455836  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.455857  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.457788  402885 out.go:177] * Found network options:
	I0805 11:49:41.459200  402885 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.68
	W0805 11:49:41.460706  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 11:49:41.460726  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:49:41.460740  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.461235  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.461410  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.461510  402885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:49:41.461549  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	W0805 11:49:41.461615  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 11:49:41.461641  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:49:41.461716  402885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:49:41.461764  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:41.464420  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.464679  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.464853  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.464880  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.464999  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.465107  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.465134  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.465172  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.465283  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.465371  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.465462  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.465549  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:41.465591  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.465704  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:41.695763  402885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:49:41.701998  402885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:49:41.702066  402885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:49:41.718465  402885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:49:41.718491  402885 start.go:495] detecting cgroup driver to use...
	I0805 11:49:41.718598  402885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:49:41.735354  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:49:41.749933  402885 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:49:41.750012  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:49:41.764742  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:49:41.780242  402885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:49:41.901102  402885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:49:42.077051  402885 docker.go:233] disabling docker service ...
	I0805 11:49:42.077130  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:49:42.093296  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:49:42.106818  402885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:49:42.240445  402885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:49:42.372217  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:49:42.388583  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:49:42.407950  402885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:49:42.408024  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.418305  402885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:49:42.418360  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.428269  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.437963  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.447753  402885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:49:42.458747  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.469409  402885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.487180  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
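
	[editor's note] Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the "pod" cgroup and allow unprivileged binds starting at port 0. If every edit applies cleanly, /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly the following lines (reconstructed from the commands, not captured from the VM):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
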
	I0805 11:49:42.498318  402885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:49:42.508644  402885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:49:42.508697  402885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:49:42.522991  402885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
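
	[editor's note] Before restarting CRI-O the flow also loads br_netfilter (the sysctl probe above failed with status 255 because the module was not yet loaded) and enables IPv4 forwarding by writing to /proc/sys. A small, purely illustrative Go sketch of reading both knobs back:

	// procsys.go: sketch of verifying the two kernel knobs touched above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func readKnob(path string) string {
		b, err := os.ReadFile(path)
		if err != nil {
			return "unavailable: " + err.Error() // e.g. br_netfilter not loaded yet
		}
		return strings.TrimSpace(string(b))
	}

	func main() {
		fmt.Println("bridge-nf-call-iptables:", readKnob("/proc/sys/net/bridge/bridge-nf-call-iptables"))
		fmt.Println("ip_forward:", readKnob("/proc/sys/net/ipv4/ip_forward"))
	}
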
	I0805 11:49:42.532658  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:49:42.651868  402885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:49:42.786123  402885 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:49:42.786203  402885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:49:42.791223  402885 start.go:563] Will wait 60s for crictl version
	I0805 11:49:42.791280  402885 ssh_runner.go:195] Run: which crictl
	I0805 11:49:42.795237  402885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:49:42.837359  402885 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:49:42.837466  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:49:42.865426  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:49:42.895825  402885 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:49:42.897127  402885 out.go:177]   - env NO_PROXY=192.168.39.102
	I0805 11:49:42.898310  402885 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.68
	I0805 11:49:42.899503  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:42.902494  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:42.902908  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:42.902938  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:42.903194  402885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:49:42.907439  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:49:42.920962  402885 mustload.go:65] Loading cluster: ha-672593
	I0805 11:49:42.921198  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:49:42.921455  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:42.921497  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:42.936259  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0805 11:49:42.936727  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:42.937191  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:42.937213  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:42.937525  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:42.937752  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:49:42.939304  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:49:42.939685  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:42.939728  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:42.955663  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I0805 11:49:42.956157  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:42.956605  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:42.956626  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:42.956921  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:42.957073  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:49:42.957257  402885 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.210
	I0805 11:49:42.957268  402885 certs.go:194] generating shared ca certs ...
	I0805 11:49:42.957286  402885 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:49:42.957406  402885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:49:42.957445  402885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:49:42.957454  402885 certs.go:256] generating profile certs ...
	I0805 11:49:42.957523  402885 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:49:42.957545  402885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae
	I0805 11:49:42.957560  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.68 192.168.39.210 192.168.39.254]
	I0805 11:49:43.159482  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae ...
	I0805 11:49:43.159512  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae: {Name:mk9efa0743d1a8bc6f436032786c5c9439a3c942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:49:43.159679  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae ...
	I0805 11:49:43.159692  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae: {Name:mk1f341e70467d49b67ce7b0a18ef6fdf82f8399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:49:43.159779  402885 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:49:43.159912  402885 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:49:43.160042  402885 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:49:43.160060  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:49:43.160074  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:49:43.160087  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:49:43.160104  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:49:43.160116  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:49:43.160129  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:49:43.160148  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:49:43.160160  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:49:43.160218  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:49:43.160251  402885 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:49:43.160261  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:49:43.160281  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:49:43.160301  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:49:43.160325  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:49:43.160369  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:49:43.160395  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.160411  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.160422  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.160456  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:49:43.163504  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:43.163971  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:49:43.164001  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:43.164208  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:49:43.164424  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:49:43.164600  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:49:43.164759  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:49:43.244031  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 11:49:43.249619  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 11:49:43.261087  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 11:49:43.265484  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 11:49:43.276152  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 11:49:43.280485  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 11:49:43.290860  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 11:49:43.296389  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 11:49:43.306729  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 11:49:43.313915  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 11:49:43.325500  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 11:49:43.331434  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0805 11:49:43.345944  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:49:43.372230  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:49:43.395401  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:49:43.418504  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:49:43.441270  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0805 11:49:43.464130  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 11:49:43.486038  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:49:43.509477  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:49:43.533958  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:49:43.558887  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:49:43.582322  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:49:43.604921  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 11:49:43.622098  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 11:49:43.638169  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 11:49:43.654185  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 11:49:43.671899  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 11:49:43.688657  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0805 11:49:43.705637  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 11:49:43.721912  402885 ssh_runner.go:195] Run: openssl version
	I0805 11:49:43.727649  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:49:43.738314  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.742602  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.742656  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.748430  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:49:43.758991  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:49:43.770286  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.774655  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.774708  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.780544  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 11:49:43.792673  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:49:43.803111  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.807374  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.807420  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.812770  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 11:49:43.824737  402885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:49:43.828599  402885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:49:43.828677  402885 kubeadm.go:934] updating node {m03 192.168.39.210 8443 v1.30.3 crio true true} ...
	I0805 11:49:43.828791  402885 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:49:43.828820  402885 kube-vip.go:115] generating kube-vip config ...
	I0805 11:49:43.828879  402885 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:49:43.845482  402885 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:49:43.845552  402885 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0805 11:49:43.845603  402885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:49:43.855094  402885 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 11:49:43.855162  402885 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 11:49:43.864668  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0805 11:49:43.864697  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:49:43.864756  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:49:43.864668  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 11:49:43.864674  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0805 11:49:43.864790  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:49:43.864827  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:49:43.864880  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:49:43.868983  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 11:49:43.869002  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 11:49:43.907054  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:49:43.907106  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 11:49:43.907135  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 11:49:43.907177  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:49:43.965501  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 11:49:43.965554  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
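
	[editor's note] The kubeadm, kubectl and kubelet binaries are referenced through checksum-protected URLs by binary.go and then staged onto the node from the local cache via scp. The Go sketch below only illustrates the download-and-verify scheme those "?checksum=file:..." URLs describe (fetch the binary, fetch the companion .sha256 file, compare digests); minikube's own downloader differs in detail.

	// fetchverify.go: sketch of fetching a release binary and checking it
	// against the published .sha256 file, as in the URLs logged above.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm"
		bin, err := fetch(base)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		want := strings.Fields(string(sum))[0] // the file may be "<hash>" or "<hash>  <name>"
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != want {
			fmt.Fprintln(os.Stderr, "checksum mismatch")
			os.Exit(1)
		}
		fmt.Println("kubeadm verified,", len(bin), "bytes")
	}
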
	I0805 11:49:44.719819  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 11:49:44.730257  402885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 11:49:44.747268  402885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:49:44.763768  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 11:49:44.782179  402885 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:49:44.786179  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:49:44.799255  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:49:44.919104  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:49:44.937494  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:49:44.937870  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:44.937915  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:44.953791  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0805 11:49:44.954174  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:44.954651  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:44.954681  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:44.954995  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:44.955244  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:49:44.955389  402885 start.go:317] joinCluster: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:49:44.955515  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 11:49:44.955535  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:49:44.958550  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:44.959052  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:49:44.959079  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:44.959226  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:49:44.959407  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:49:44.959582  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:49:44.959772  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:49:45.117881  402885 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:49:45.117941  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vtxzg0.uer1bhotyz2fxpnt --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443"
	I0805 11:50:08.729790  402885 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vtxzg0.uer1bhotyz2fxpnt --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443": (23.611805037s)
	I0805 11:50:08.729835  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 11:50:09.417094  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-672593-m03 minikube.k8s.io/updated_at=2024_08_05T11_50_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=ha-672593 minikube.k8s.io/primary=false
	I0805 11:50:09.555213  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-672593-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 11:50:09.677594  402885 start.go:319] duration metric: took 24.722198513s to joinCluster
	I0805 11:50:09.677673  402885 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:50:09.677971  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:50:09.679027  402885 out.go:177] * Verifying Kubernetes components...
	I0805 11:50:09.680646  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:50:09.942761  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:50:10.023334  402885 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:50:10.023698  402885 kapi.go:59] client config for ha-672593: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key", CAFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 11:50:10.023834  402885 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0805 11:50:10.024130  402885 node_ready.go:35] waiting up to 6m0s for node "ha-672593-m03" to be "Ready" ...
	I0805 11:50:10.024226  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:10.024236  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:10.024246  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:10.024256  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:10.028165  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
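
	[editor's note] From here node_ready.go polls GET /api/v1/nodes/ha-672593-m03 against the first control plane (after overriding the stale VIP host) until the node reports a Ready condition of "True", for up to 6 minutes. A rough standard-library sketch of the same check follows, reusing the client certificate, key and CA paths from the kapi.go config above; the 500ms poll interval is an assumption.

	// nodeready.go: sketch of polling a Node's Ready condition over the
	// Kubernetes REST API, as node_ready.go does in the log above.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"encoding/json"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	type node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		cert, err := tls.LoadX509KeyPair(
			"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt",
			"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key")
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		}}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03")
			if err == nil {
				var n node
				json.NewDecoder(resp.Body).Decode(&n)
				resp.Body.Close()
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for Ready")
	}
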
	I0805 11:50:10.524842  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:10.524871  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:10.524883  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:10.524890  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:10.529120  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:11.024991  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:11.025013  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:11.025021  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:11.025027  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:11.028905  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:11.524966  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:11.524996  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:11.525009  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:11.525015  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:11.528136  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:12.024883  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:12.024907  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:12.024916  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:12.024921  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:12.027768  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:12.028384  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:12.524450  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:12.524476  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:12.524488  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:12.524496  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:12.528335  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:13.025222  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:13.025246  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:13.025255  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:13.025260  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:13.029083  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:13.524572  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:13.524607  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:13.524615  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:13.524627  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:13.527835  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:14.024390  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:14.024413  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:14.024424  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:14.024429  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:14.028290  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:14.029078  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:14.525035  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:14.525055  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:14.525066  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:14.525072  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:14.528859  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:15.024339  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:15.024362  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:15.024370  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:15.024376  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:15.027697  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:15.524691  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:15.524720  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:15.524728  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:15.524733  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:15.527918  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:16.024536  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:16.024561  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:16.024570  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:16.024574  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:16.028995  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:16.030417  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:16.525306  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:16.525335  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:16.525363  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:16.525373  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:16.530664  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:50:17.025263  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:17.025292  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:17.025303  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:17.025309  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:17.028817  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:17.524687  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:17.524711  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:17.524718  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:17.524722  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:17.528498  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:18.024565  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:18.024589  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:18.024598  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:18.024603  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:18.027864  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:18.525024  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:18.525048  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:18.525056  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:18.525061  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:18.528139  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:18.528928  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:19.025141  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:19.025166  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:19.025174  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:19.025178  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:19.028526  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:19.524651  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:19.524683  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:19.524696  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:19.524704  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:19.528218  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:20.024501  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:20.024524  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:20.024534  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:20.024538  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:20.027848  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:20.525108  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:20.525149  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:20.525163  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:20.525170  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:20.528777  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:20.529374  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:21.024348  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:21.024369  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:21.024377  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:21.024382  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:21.028029  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:21.524857  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:21.524881  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:21.524889  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:21.524892  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:21.528187  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:22.025326  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:22.025348  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:22.025357  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:22.025362  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:22.028438  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:22.525068  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:22.525095  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:22.525107  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:22.525114  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:22.528962  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:22.529725  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:23.024957  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:23.024979  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:23.024988  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:23.024992  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:23.028631  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:23.525048  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:23.525071  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:23.525087  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:23.525092  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:23.528562  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:24.024433  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:24.024461  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:24.024495  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:24.024500  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:24.027667  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:24.524622  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:24.524649  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:24.524661  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:24.524667  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:24.528997  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:25.025366  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:25.025394  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:25.025405  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:25.025412  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:25.028714  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:25.029361  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:25.524394  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:25.524419  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:25.524426  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:25.524430  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:25.528468  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:26.024479  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:26.024501  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:26.024530  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:26.024535  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:26.027751  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:26.524434  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:26.524515  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:26.524533  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:26.524540  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:26.529314  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:27.025202  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:27.025227  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:27.025239  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:27.025246  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:27.028787  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:27.029468  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:27.524823  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:27.524851  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:27.524861  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:27.524869  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:27.528579  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.025351  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.025376  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.025385  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.025390  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.028596  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.525015  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.525038  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.525047  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.525051  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.528823  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.529486  402885 node_ready.go:49] node "ha-672593-m03" has status "Ready":"True"
	I0805 11:50:28.529505  402885 node_ready.go:38] duration metric: took 18.505353861s for node "ha-672593-m03" to be "Ready" ...
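
[Editor's note] The loop above GETs /api/v1/nodes/ha-672593-m03 roughly every 500ms and checks the node's Ready condition until it reports True. Below is a minimal client-go sketch of the same check, not minikube's actual node_ready helper; the kubeconfig path is the one the log loaded earlier and the 6m budget mirrors the log's timeout.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the same kubeconfig the log shows being read.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19377-383955/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-672593-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly 500ms intervals
	}
	fmt.Println("timed out waiting for node to become Ready")
}
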
	I0805 11:50:28.529514  402885 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:50:28.529585  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:28.529594  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.529601  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.529605  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.536212  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:50:28.544069  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.544156  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sfh7c
	I0805 11:50:28.544166  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.544173  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.544180  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.546731  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.547413  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:28.547427  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.547435  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.547439  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.550078  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.550479  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.550496  402885 pod_ready.go:81] duration metric: took 6.406258ms for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.550506  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.550578  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sgd4v
	I0805 11:50:28.550589  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.550599  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.550605  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.553192  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.555721  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:28.555751  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.555762  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.555768  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.558455  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.559075  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.559096  402885 pod_ready.go:81] duration metric: took 8.581234ms for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.559108  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.559181  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593
	I0805 11:50:28.559190  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.559199  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.559204  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.562010  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.562791  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:28.562805  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.562811  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.562815  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.565381  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.565930  402885 pod_ready.go:92] pod "etcd-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.565945  402885 pod_ready.go:81] duration metric: took 6.830097ms for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.565959  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.566023  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m02
	I0805 11:50:28.566031  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.566038  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.566045  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.568834  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.569482  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:28.569495  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.569502  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.569505  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.572587  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.573289  402885 pod_ready.go:92] pod "etcd-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.573311  402885 pod_ready.go:81] duration metric: took 7.339266ms for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.573323  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.725742  402885 request.go:629] Waited for 152.340768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m03
	I0805 11:50:28.725827  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m03
	I0805 11:50:28.725833  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.725841  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.725847  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.729252  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.925847  402885 request.go:629] Waited for 195.914849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.925936  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.925946  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.925957  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.925965  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.929403  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.929892  402885 pod_ready.go:92] pod "etcd-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.929914  402885 pod_ready.go:81] duration metric: took 356.582949ms for pod "etcd-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.929937  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.126071  402885 request.go:629] Waited for 196.051705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:50:29.126148  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:50:29.126154  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.126164  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.126182  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.129507  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.325716  402885 request.go:629] Waited for 195.378242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:29.325793  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:29.325804  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.325815  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.325823  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.329462  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.330002  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:29.330023  402885 pod_ready.go:81] duration metric: took 400.076496ms for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.330038  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.526046  402885 request.go:629] Waited for 195.91009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:50:29.526146  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:50:29.526157  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.526169  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.526177  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.529498  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.725557  402885 request.go:629] Waited for 195.364105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:29.725617  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:29.725622  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.725630  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.725634  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.729006  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.729651  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:29.729675  402885 pod_ready.go:81] duration metric: took 399.625672ms for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.729685  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.925776  402885 request.go:629] Waited for 195.98755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m03
	I0805 11:50:29.925853  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m03
	I0805 11:50:29.925858  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.925866  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.925871  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.929515  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.125672  402885 request.go:629] Waited for 195.388467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:30.125758  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:30.125767  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.125783  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.125792  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.128922  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.129650  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:30.129684  402885 pod_ready.go:81] duration metric: took 399.992597ms for pod "kube-apiserver-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.129695  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.325756  402885 request.go:629] Waited for 195.988109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:50:30.325875  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:50:30.325886  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.325911  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.325919  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.329291  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.525991  402885 request.go:629] Waited for 196.004967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:30.526071  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:30.526079  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.526086  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.526094  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.529610  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.530247  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:30.530267  402885 pod_ready.go:81] duration metric: took 400.565722ms for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.530278  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.725335  402885 request.go:629] Waited for 194.965338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:50:30.725416  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:50:30.725423  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.725433  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.725438  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.729104  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.925016  402885 request.go:629] Waited for 195.311921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:30.925096  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:30.925104  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.925116  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.925127  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.928500  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.928949  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:30.928990  402885 pod_ready.go:81] duration metric: took 398.695012ms for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.929005  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.125084  402885 request.go:629] Waited for 195.981924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m03
	I0805 11:50:31.125154  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m03
	I0805 11:50:31.125160  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.125168  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.125172  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.128777  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:31.326045  402885 request.go:629] Waited for 196.356047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.326145  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.326157  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.326170  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.326178  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.329154  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:31.329655  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:31.329687  402885 pod_ready.go:81] duration metric: took 400.672393ms for pod "kube-controller-manager-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.329709  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q4tq" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.525634  402885 request.go:629] Waited for 195.841646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q4tq
	I0805 11:50:31.525698  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q4tq
	I0805 11:50:31.525704  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.525711  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.525716  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.530351  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:31.725340  402885 request.go:629] Waited for 194.093948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.725402  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.725410  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.725418  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.725425  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.728593  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:31.729392  402885 pod_ready.go:92] pod "kube-proxy-4q4tq" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:31.729413  402885 pod_ready.go:81] duration metric: took 399.693493ms for pod "kube-proxy-4q4tq" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.729422  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.925942  402885 request.go:629] Waited for 196.449987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:50:31.926015  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:50:31.926020  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.926027  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.926035  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.929371  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.125350  402885 request.go:629] Waited for 195.285703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:32.125432  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:32.125443  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.125454  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.125466  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.128650  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.129297  402885 pod_ready.go:92] pod "kube-proxy-mdwh2" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:32.129317  402885 pod_ready.go:81] duration metric: took 399.886843ms for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.129329  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.325417  402885 request.go:629] Waited for 196.006397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:50:32.325498  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:50:32.325504  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.325511  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.325516  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.329140  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.525094  402885 request.go:629] Waited for 195.277586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.525184  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.525194  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.525203  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.525210  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.528764  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.529396  402885 pod_ready.go:92] pod "kube-proxy-wtsdt" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:32.529413  402885 pod_ready.go:81] duration metric: took 400.078107ms for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.529423  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.725553  402885 request.go:629] Waited for 196.049403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:50:32.725649  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:50:32.725661  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.725671  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.725681  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.728972  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.926030  402885 request.go:629] Waited for 196.35358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.926106  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.926113  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.926130  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.926141  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.929853  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.930549  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:32.930571  402885 pod_ready.go:81] duration metric: took 401.138815ms for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.930584  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.125705  402885 request.go:629] Waited for 195.022367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:50:33.125778  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:50:33.125787  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.125801  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.125810  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.129480  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:33.325298  402885 request.go:629] Waited for 195.162835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:33.325358  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:33.325363  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.325371  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.325375  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.328055  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:33.328850  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:33.328867  402885 pod_ready.go:81] duration metric: took 398.275917ms for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.328877  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.525913  402885 request.go:629] Waited for 196.95991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m03
	I0805 11:50:33.525994  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m03
	I0805 11:50:33.526003  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.526037  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.526049  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.529495  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:33.725425  402885 request.go:629] Waited for 195.362958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:33.725514  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:33.725530  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.725554  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.725567  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.731244  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:50:33.731686  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:33.731706  402885 pod_ready.go:81] duration metric: took 402.821942ms for pod "kube-scheduler-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.731716  402885 pod_ready.go:38] duration metric: took 5.202193895s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
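
[Editor's note] The repeated "Waited for ~195ms due to client-side throttling" lines during the pod-ready phase come from client-go's default rate limiter: the rest.Config logged earlier has QPS:0 and Burst:0, so client-go falls back to its defaults (5 requests/second, burst 10), which spaces back-to-back GETs about 200ms apart. A hedged sketch of raising those limits on a rest.Config follows; the numbers are illustrative, not what minikube configures.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// clientWithHigherLimits builds a clientset whose rate limiter allows more than
// the client-go defaults (5 QPS, burst 10) that produce the ~200ms waits above.
func clientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // illustrative value
	cfg.Burst = 100 // illustrative value
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := clientWithHigherLimits("/home/jenkins/minikube-integration/19377-383955/kubeconfig"); err != nil {
		fmt.Println("failed to build client:", err)
	}
}
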
	I0805 11:50:33.731731  402885 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:50:33.731796  402885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:50:33.748061  402885 api_server.go:72] duration metric: took 24.070351029s to wait for apiserver process to appear ...
	I0805 11:50:33.748096  402885 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:50:33.748115  402885 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0805 11:50:33.752440  402885 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0805 11:50:33.752511  402885 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0805 11:50:33.752519  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.752528  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.752532  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.753367  402885 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 11:50:33.753438  402885 api_server.go:141] control plane version: v1.30.3
	I0805 11:50:33.753452  402885 api_server.go:131] duration metric: took 5.350181ms to wait for apiserver health ...
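
[Editor's note] After the pod waits, the tool confirms the apiserver process exists (pgrep) and that GET /healthz returns 200 with the body "ok" before reading /version. A rough client-go equivalent of that healthz probe is sketched below; it is an illustration against the same kubeconfig, not minikube's api_server helper.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19377-383955/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz on the apiserver; a healthy server answers 200 with the body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}
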
	I0805 11:50:33.753461  402885 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:50:33.925639  402885 request.go:629] Waited for 172.091982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:33.925722  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:33.925730  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.925742  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.925750  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.932854  402885 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 11:50:33.940101  402885 system_pods.go:59] 24 kube-system pods found
	I0805 11:50:33.940129  402885 system_pods.go:61] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:50:33.940136  402885 system_pods.go:61] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:50:33.940141  402885 system_pods.go:61] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:50:33.940147  402885 system_pods.go:61] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:50:33.940153  402885 system_pods.go:61] "etcd-ha-672593-m03" [6091761d-b610-4448-a853-274433bb59d0] Running
	I0805 11:50:33.940157  402885 system_pods.go:61] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:50:33.940161  402885 system_pods.go:61] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:50:33.940166  402885 system_pods.go:61] "kindnet-wnbr8" [351b5e1b-5da4-442d-96b7-213e3e9a74aa] Running
	I0805 11:50:33.940171  402885 system_pods.go:61] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:50:33.940175  402885 system_pods.go:61] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:50:33.940180  402885 system_pods.go:61] "kube-apiserver-ha-672593-m03" [1e3694f4-9bc0-4e9a-8e1c-179bbb1c78ca] Running
	I0805 11:50:33.940188  402885 system_pods.go:61] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:50:33.940195  402885 system_pods.go:61] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:50:33.940200  402885 system_pods.go:61] "kube-controller-manager-ha-672593-m03" [c30415ed-5173-4283-9174-72d05ed227cc] Running
	I0805 11:50:33.940205  402885 system_pods.go:61] "kube-proxy-4q4tq" [44cceade-cf8b-4c4d-b06e-c83c3f20bd3a] Running
	I0805 11:50:33.940210  402885 system_pods.go:61] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:50:33.940215  402885 system_pods.go:61] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:50:33.940223  402885 system_pods.go:61] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:50:33.940228  402885 system_pods.go:61] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:50:33.940232  402885 system_pods.go:61] "kube-scheduler-ha-672593-m03" [9734cd6a-7e2a-4a7e-99e9-87b72c55a073] Running
	I0805 11:50:33.940237  402885 system_pods.go:61] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:50:33.940244  402885 system_pods.go:61] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:50:33.940249  402885 system_pods.go:61] "kube-vip-ha-672593-m03" [abc05dea-8108-4a5e-a223-1410c903fccc] Running
	I0805 11:50:33.940256  402885 system_pods.go:61] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:50:33.940266  402885 system_pods.go:74] duration metric: took 186.796553ms to wait for pod list to return data ...
	I0805 11:50:33.940278  402885 default_sa.go:34] waiting for default service account to be created ...
	I0805 11:50:34.125710  402885 request.go:629] Waited for 185.326504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:50:34.125770  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:50:34.125775  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:34.125783  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:34.125791  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:34.129318  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:34.129445  402885 default_sa.go:45] found service account: "default"
	I0805 11:50:34.129459  402885 default_sa.go:55] duration metric: took 189.171631ms for default service account to be created ...
	I0805 11:50:34.129467  402885 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 11:50:34.325892  402885 request.go:629] Waited for 196.35086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:34.325994  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:34.326006  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:34.326016  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:34.326022  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:34.332670  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:50:34.338575  402885 system_pods.go:86] 24 kube-system pods found
	I0805 11:50:34.338603  402885 system_pods.go:89] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:50:34.338609  402885 system_pods.go:89] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:50:34.338613  402885 system_pods.go:89] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:50:34.338617  402885 system_pods.go:89] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:50:34.338621  402885 system_pods.go:89] "etcd-ha-672593-m03" [6091761d-b610-4448-a853-274433bb59d0] Running
	I0805 11:50:34.338626  402885 system_pods.go:89] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:50:34.338630  402885 system_pods.go:89] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:50:34.338634  402885 system_pods.go:89] "kindnet-wnbr8" [351b5e1b-5da4-442d-96b7-213e3e9a74aa] Running
	I0805 11:50:34.338638  402885 system_pods.go:89] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:50:34.338646  402885 system_pods.go:89] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:50:34.338650  402885 system_pods.go:89] "kube-apiserver-ha-672593-m03" [1e3694f4-9bc0-4e9a-8e1c-179bbb1c78ca] Running
	I0805 11:50:34.338657  402885 system_pods.go:89] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:50:34.338662  402885 system_pods.go:89] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:50:34.338668  402885 system_pods.go:89] "kube-controller-manager-ha-672593-m03" [c30415ed-5173-4283-9174-72d05ed227cc] Running
	I0805 11:50:34.338673  402885 system_pods.go:89] "kube-proxy-4q4tq" [44cceade-cf8b-4c4d-b06e-c83c3f20bd3a] Running
	I0805 11:50:34.338679  402885 system_pods.go:89] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:50:34.338683  402885 system_pods.go:89] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:50:34.338689  402885 system_pods.go:89] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:50:34.338693  402885 system_pods.go:89] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:50:34.338699  402885 system_pods.go:89] "kube-scheduler-ha-672593-m03" [9734cd6a-7e2a-4a7e-99e9-87b72c55a073] Running
	I0805 11:50:34.338703  402885 system_pods.go:89] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:50:34.338707  402885 system_pods.go:89] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:50:34.338711  402885 system_pods.go:89] "kube-vip-ha-672593-m03" [abc05dea-8108-4a5e-a223-1410c903fccc] Running
	I0805 11:50:34.338714  402885 system_pods.go:89] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:50:34.338725  402885 system_pods.go:126] duration metric: took 209.252622ms to wait for k8s-apps to be running ...
	I0805 11:50:34.338734  402885 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 11:50:34.338784  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:50:34.353831  402885 system_svc.go:56] duration metric: took 15.083924ms WaitForService to wait for kubelet
	I0805 11:50:34.353862  402885 kubeadm.go:582] duration metric: took 24.676155589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:50:34.353884  402885 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:50:34.525311  402885 request.go:629] Waited for 171.329016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0805 11:50:34.525377  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0805 11:50:34.525389  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:34.525404  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:34.525412  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:34.529093  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:34.530327  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:50:34.530352  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:50:34.530368  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:50:34.530373  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:50:34.530380  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:50:34.530385  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:50:34.530392  402885 node_conditions.go:105] duration metric: took 176.501612ms to run NodePressure ...
	I0805 11:50:34.530410  402885 start.go:241] waiting for startup goroutines ...
	I0805 11:50:34.530439  402885 start.go:255] writing updated cluster config ...
	I0805 11:50:34.530795  402885 ssh_runner.go:195] Run: rm -f paused
	I0805 11:50:34.585518  402885 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 11:50:34.587986  402885 out.go:177] * Done! kubectl is now configured to use "ha-672593" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.286550626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858883286528085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f35dadf0-36e1-45dc-b697-9301cd29b805 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.287012549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17544197-0094-4fcf-a19d-b030ed1d1b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.287079594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17544197-0094-4fcf-a19d-b030ed1d1b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.287320928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17544197-0094-4fcf-a19d-b030ed1d1b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.331145232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d58a8712-678f-4f93-a495-9177fc459b86 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.331295571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d58a8712-678f-4f93-a495-9177fc459b86 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.338024981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4c66e6e-8c63-42ab-9ee4-03a6b8efc26e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.338482519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858883338455875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4c66e6e-8c63-42ab-9ee4-03a6b8efc26e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.338908162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e2d12e0-ece9-4665-84d6-b977010c5100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.339031186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e2d12e0-ece9-4665-84d6-b977010c5100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.339307712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e2d12e0-ece9-4665-84d6-b977010c5100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.378761135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=168ba876-3055-446c-a593-489b254bd133 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.378852404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=168ba876-3055-446c-a593-489b254bd133 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.380195765Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=925bac25-fabc-4b49-bbf2-36b9d647f2fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.380685355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858883380655498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=925bac25-fabc-4b49-bbf2-36b9d647f2fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.381410881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5888af6-6b88-4e51-a908-1b4ee0d3b409 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.381481408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5888af6-6b88-4e51-a908-1b4ee0d3b409 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.383426915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5888af6-6b88-4e51-a908-1b4ee0d3b409 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.428774314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62376fc1-acd0-432a-87e1-92bfaf21240d name=/runtime.v1.RuntimeService/Version
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.428878887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62376fc1-acd0-432a-87e1-92bfaf21240d name=/runtime.v1.RuntimeService/Version
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.430130037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7dfda3c9-1b48-4d49-a79c-5ad24870a58b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.430734367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858883430709653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dfda3c9-1b48-4d49-a79c-5ad24870a58b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.431334915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62cbda8f-dd76-4b45-90ba-e90da6661c5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.431476210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62cbda8f-dd76-4b45-90ba-e90da6661c5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:54:43 ha-672593 crio[682]: time="2024-08-05 11:54:43.431788667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62cbda8f-dd76-4b45-90ba-e90da6661c5f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f332a2eefb38a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   96a63340a808e       busybox-fc5497c4f-xx72g
	73fd9ef194837       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   162aab1f9af67       coredns-7db6d8ff4d-sgd4v
	e556c9ba49f5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   9d62d6071098f       storage-provisioner
	6354e702fe80a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   60a5e5f93bb15       coredns-7db6d8ff4d-sfh7c
	57cec2b511aa8       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   214360f7ff706       kindnet-7fndz
	11c4e00c9ba78       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   b824fdfadbf52       kube-proxy-wtsdt
	019abd676baf2       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   1d9da1cd788ca       kube-vip-ha-672593
	1019d9e100746       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   1c9e20b33b7b7       etcd-ha-672593
	50907082bdeb8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   de38455447227       kube-apiserver-ha-672593
	ca9839b56e3e6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   c7429b1a8552f       kube-scheduler-ha-672593
	b17d8131f0edc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   116d38bae0e1d       kube-controller-manager-ha-672593
	
	
	==> coredns [6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94] <==
	[INFO] 10.244.0.4:39677 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002029758s
	[INFO] 10.244.2.2:53990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169975s
	[INFO] 10.244.2.2:39764 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234053s
	[INFO] 10.244.2.2:43842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223142s
	[INFO] 10.244.1.2:42884 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137522s
	[INFO] 10.244.1.2:35448 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147398s
	[INFO] 10.244.1.2:52034 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147397s
	[INFO] 10.244.0.4:50553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110672s
	[INFO] 10.244.0.4:47698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069619s
	[INFO] 10.244.0.4:39504 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139191s
	[INFO] 10.244.0.4:35787 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065087s
	[INFO] 10.244.2.2:57478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118877s
	[INFO] 10.244.2.2:44657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121159s
	[INFO] 10.244.2.2:33599 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126768s
	[INFO] 10.244.1.2:54159 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179418s
	[INFO] 10.244.1.2:49562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092072s
	[INFO] 10.244.0.4:42290 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077914s
	[INFO] 10.244.2.2:59634 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164343s
	[INFO] 10.244.2.2:43784 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159677s
	[INFO] 10.244.1.2:49443 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173465s
	[INFO] 10.244.1.2:58280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015744s
	[INFO] 10.244.0.4:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111584s
	[INFO] 10.244.0.4:42223 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078636s
	[INFO] 10.244.0.4:42616 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084454s
	[INFO] 10.244.0.4:49723 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087038s
	
	
	==> coredns [73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1] <==
	[INFO] 10.244.0.4:55666 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00061709s
	[INFO] 10.244.2.2:58579 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003537694s
	[INFO] 10.244.2.2:57289 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170189s
	[INFO] 10.244.2.2:42256 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011807351s
	[INFO] 10.244.2.2:32771 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000276871s
	[INFO] 10.244.2.2:34794 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013565s
	[INFO] 10.244.1.2:33425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168612s
	[INFO] 10.244.1.2:49339 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001876895s
	[INFO] 10.244.1.2:41345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001388007s
	[INFO] 10.244.1.2:39680 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097906s
	[INFO] 10.244.1.2:38660 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162674s
	[INFO] 10.244.0.4:37518 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828264s
	[INFO] 10.244.0.4:43389 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136081s
	[INFO] 10.244.0.4:58226 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071105s
	[INFO] 10.244.0.4:43658 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098104s
	[INFO] 10.244.2.2:40561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109999s
	[INFO] 10.244.1.2:41071 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120854s
	[INFO] 10.244.1.2:40710 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080783s
	[INFO] 10.244.0.4:54672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011185s
	[INFO] 10.244.0.4:55288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117161s
	[INFO] 10.244.0.4:41744 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068123s
	[INFO] 10.244.2.2:60620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013916s
	[INFO] 10.244.2.2:52672 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153187s
	[INFO] 10.244.1.2:36870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144481s
	[INFO] 10.244.1.2:43017 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166959s
	
	
	==> describe nodes <==
	Name:               ha-672593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T11_47_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:47:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:54:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:48:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-672593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb8829a6b1d145d6aee2ea0e80194fe4
	  System UUID:                fb8829a6-b1d1-45d6-aee2-ea0e80194fe4
	  Boot ID:                    ecb22512-bcb2-43ab-b502-fc0c346e754f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xx72g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 coredns-7db6d8ff4d-sfh7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m42s
	  kube-system                 coredns-7db6d8ff4d-sgd4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m42s
	  kube-system                 etcd-ha-672593                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m55s
	  kube-system                 kindnet-7fndz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m43s
	  kube-system                 kube-apiserver-ha-672593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-controller-manager-ha-672593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-proxy-wtsdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-scheduler-ha-672593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-vip-ha-672593                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m42s  kube-proxy       
	  Normal  Starting                 6m56s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m55s  kubelet          Node ha-672593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s  kubelet          Node ha-672593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s  kubelet          Node ha-672593 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m44s  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal  NodeReady                6m26s  kubelet          Node ha-672593 status is now: NodeReady
	  Normal  RegisteredNode           5m33s  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal  RegisteredNode           4m20s  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	
	
	Name:               ha-672593-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:48:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:52:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-672593-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8aa3c6ca9e9a439e91c6c120c9ce9ce7
	  System UUID:                8aa3c6ca-9e9a-439e-91c6-c120c9ce9ce7
	  Boot ID:                    38ffe74c-4439-4306-9791-6e268f90d149
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vn64j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 etcd-ha-672593-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m49s
	  kube-system                 kindnet-85fm7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m51s
	  kube-system                 kube-apiserver-ha-672593-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-controller-manager-ha-672593-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-proxy-mdwh2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-ha-672593-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-vip-ha-672593-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m51s (x8 over 5m51s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x8 over 5m51s)  kubelet          Node ha-672593-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x7 over 5m51s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m49s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-672593-m02 status is now: NodeNotReady
	
	
	Name:               ha-672593-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_50_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:54:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-672593-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 95bc9a27650e44d8882cc62883736cdc
	  System UUID:                95bc9a27-650e-44d8-882c-c62883736cdc
	  Boot ID:                    ab03c867-4435-497f-a3f3-21dc7ccd0744
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dq7jg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 etcd-ha-672593-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m35s
	  kube-system                 kindnet-wnbr8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m37s
	  kube-system                 kube-apiserver-ha-672593-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-ha-672593-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-4q4tq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-ha-672593-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-vip-ha-672593-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node ha-672593-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node ha-672593-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node ha-672593-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m33s                  node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal  RegisteredNode           4m33s                  node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	
	
	Name:               ha-672593-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_51_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:51:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:54:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:51:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:51:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:51:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:52:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-672593-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5561d3ea391496e983c8078f06ff6c0
	  System UUID:                f5561d3e-a391-496e-983c-8078f06ff6c0
	  Boot ID:                    8c3ba653-2f5c-4f6b-97cc-3874b6ca2e6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6dfc5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m29s
	  kube-system                 kube-proxy-lpp7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m29s (x2 over 3m29s)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s (x2 over 3m29s)  kubelet          Node ha-672593-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s (x2 over 3m29s)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m28s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal  RegisteredNode           3m28s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal  RegisteredNode           3m25s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-672593-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 5 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050074] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039249] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.753225] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.440209] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588179] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.056383] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.055926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054067] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.198790] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.118095] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.300760] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.230253] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.263666] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.055709] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.688004] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.472104] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[Aug 5 11:48] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.212364] kauditd_printk_skb: 29 callbacks suppressed
	[ +52.825644] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd] <==
	{"level":"warn","ts":"2024-08-05T11:54:43.484737Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.585414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.67133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.724041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.727536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.741346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.749938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.757423Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.761718Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.76534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.786058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.790246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.807035Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.832444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.845251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.851175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.868181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.877069Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.882654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.885212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.886339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.890281Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.896541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.902622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:54:43.90865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:54:43 up 7 min,  0 users,  load average: 0.34, 0.31, 0.17
	Linux ha-672593 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46] <==
	I0805 11:54:07.434525       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:54:17.441869       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:54:17.441941       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:54:17.442168       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:54:17.442197       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:54:17.442257       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:54:17.442264       1 main.go:299] handling current node
	I0805 11:54:17.442274       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:54:17.442279       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:54:27.439097       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:54:27.439192       1 main.go:299] handling current node
	I0805 11:54:27.439224       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:54:27.439242       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:54:27.439394       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:54:27.439416       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:54:27.439490       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:54:27.439508       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:54:37.436617       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:54:37.436674       1 main.go:299] handling current node
	I0805 11:54:37.436691       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:54:37.436698       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:54:37.436859       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:54:37.436882       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:54:37.436934       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:54:37.437014       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565] <==
	E0805 11:50:43.789269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34086: use of closed network connection
	E0805 11:50:43.976322       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34114: use of closed network connection
	E0805 11:51:14.956507       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0805 11:51:14.957049       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0805 11:51:14.956693       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.074µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0805 11:51:14.958274       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0805 11:51:14.958403       1 timeout.go:142] post-timeout activity - time-elapsed: 1.974459ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0805 11:51:15.355884       1 trace.go:236] Trace[2057187306]: "Delete" accept:application/vnd.kubernetes.protobuf, */*,audit-id:d69f909f-0d36-4268-b370-405a73ba5a2d,client:192.168.39.102,api-group:,api-version:v1,name:kindnet-b7k4j,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-b7k4j,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:daemon-set-controller,verb:DELETE (05-Aug-2024 11:51:14.488) (total time: 867ms):
	Trace[2057187306]: ["GuaranteedUpdate etcd3" audit-id:d69f909f-0d36-4268-b370-405a73ba5a2d,key:/pods/kube-system/kindnet-b7k4j,type:*core.Pod,resource:pods 830ms (11:51:14.525)
	Trace[2057187306]:  ---"Txn call completed" 830ms (11:51:15.355)]
	Trace[2057187306]: [867.121964ms] [867.121964ms] END
	I0805 11:51:15.356091       1 trace.go:236] Trace[1882335687]: "Patch" accept:application/json, */*,audit-id:6a7ce633-7c2c-4b1f-ab10-58dd4b60ca48,client:192.168.39.4,api-group:,api-version:v1,name:ha-672593-m04,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-672593-m04,user-agent:kubeadm/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PATCH (05-Aug-2024 11:51:14.525) (total time: 830ms):
	Trace[1882335687]: ["GuaranteedUpdate etcd3" audit-id:6a7ce633-7c2c-4b1f-ab10-58dd4b60ca48,key:/minions/ha-672593-m04,type:*core.Node,resource:nodes 830ms (11:51:14.525)
	Trace[1882335687]:  ---"Txn call completed" 827ms (11:51:15.355)]
	Trace[1882335687]: ---"Object stored in database" 828ms (11:51:15.355)
	Trace[1882335687]: [830.979361ms] [830.979361ms] END
	I0805 11:51:15.373191       1 trace.go:236] Trace[1292202851]: "Delete" accept:application/vnd.kubernetes.protobuf, */*,audit-id:e8bdf6a1-f936-44aa-a3e3-f2dcf8ca002f,client:192.168.39.102,api-group:,api-version:v1,name:kindnet-fgzhp,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-fgzhp,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:daemon-set-controller,verb:DELETE (05-Aug-2024 11:51:14.488) (total time: 885ms):
	Trace[1292202851]: ["GuaranteedUpdate etcd3" audit-id:e8bdf6a1-f936-44aa-a3e3-f2dcf8ca002f,key:/pods/kube-system/kindnet-fgzhp,type:*core.Pod,resource:pods 851ms (11:51:14.521)
	Trace[1292202851]:  ---"Txn call completed" 244ms (11:51:14.765)
	Trace[1292202851]:  ---"Txn call completed" 606ms (11:51:15.372)]
	Trace[1292202851]: [885.062494ms] [885.062494ms] END
	I0805 11:51:15.374273       1 trace.go:236] Trace[1503873603]: "Delete" accept:application/vnd.kubernetes.protobuf, */*,audit-id:1f8a93b4-7913-4c1b-b500-1e3c1a25153d,client:192.168.39.102,api-group:,api-version:v1,name:kube-proxy-rzj75,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-rzj75,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:daemon-set-controller,verb:DELETE (05-Aug-2024 11:51:14.446) (total time: 928ms):
	Trace[1503873603]: ---"Object deleted from database" 854ms (11:51:15.374)
	Trace[1503873603]: [928.175411ms] [928.175411ms] END
	W0805 11:52:36.535304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.210]
	
	
	==> kube-controller-manager [b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775] <==
	I0805 11:50:06.241131       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-672593-m03" podCIDRs=["10.244.2.0/24"]
	I0805 11:50:10.019449       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-672593-m03"
	I0805 11:50:35.511614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.312251ms"
	I0805 11:50:35.543228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.545622ms"
	I0805 11:50:35.718444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="175.00124ms"
	I0805 11:50:35.886337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="167.824683ms"
	E0805 11:50:35.886402       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0805 11:50:35.886582       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.447µs"
	I0805 11:50:35.892381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.907µs"
	I0805 11:50:36.181404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.778µs"
	I0805 11:50:38.032301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.33µs"
	I0805 11:50:39.301373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.703269ms"
	I0805 11:50:39.302794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.516µs"
	I0805 11:50:39.398444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.845419ms"
	I0805 11:50:39.398657       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.043µs"
	I0805 11:50:40.772924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.173027ms"
	I0805 11:50:40.773129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.867µs"
	E0805 11:51:13.986714       1 certificate_controller.go:146] Sync csr-l8sbz failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-l8sbz": the object has been modified; please apply your changes to the latest version and try again
	I0805 11:51:14.287098       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-672593-m04\" does not exist"
	I0805 11:51:14.324037       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-672593-m04" podCIDRs=["10.244.3.0/24"]
	I0805 11:51:15.030692       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-672593-m04"
	I0805 11:52:02.782810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-672593-m04"
	I0805 11:52:58.822861       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-672593-m04"
	I0805 11:52:58.914362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.130266ms"
	I0805 11:52:58.914624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.357µs"
	
	
	==> kube-proxy [11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48] <==
	I0805 11:48:01.596014       1 server_linux.go:69] "Using iptables proxy"
	I0805 11:48:01.611292       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0805 11:48:01.688564       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 11:48:01.688703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 11:48:01.688807       1 server_linux.go:165] "Using iptables Proxier"
	I0805 11:48:01.692611       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 11:48:01.693822       1 server.go:872] "Version info" version="v1.30.3"
	I0805 11:48:01.693932       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:48:01.695740       1 config.go:192] "Starting service config controller"
	I0805 11:48:01.696023       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 11:48:01.696086       1 config.go:101] "Starting endpoint slice config controller"
	I0805 11:48:01.696106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 11:48:01.697101       1 config.go:319] "Starting node config controller"
	I0805 11:48:01.697140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 11:48:01.796613       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 11:48:01.796745       1 shared_informer.go:320] Caches are synced for service config
	I0805 11:48:01.797314       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334] <==
	E0805 11:47:45.748460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 11:47:45.750500       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 11:47:45.750619       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 11:47:45.752782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 11:47:45.752912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 11:47:45.760425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 11:47:45.760473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 11:47:45.823044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 11:47:45.823236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 11:47:45.870442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:47:45.870650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:47:45.877420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 11:47:45.877634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 11:47:46.070799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 11:47:46.070918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 11:47:46.073847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 11:47:46.074127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 11:47:46.087219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 11:47:46.087263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 11:47:46.159418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 11:47:46.159494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0805 11:47:48.317360       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 11:50:06.340620       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6lh4q\": pod kindnet-6lh4q is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-6lh4q" node="ha-672593-m03"
	E0805 11:50:06.341001       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6lh4q\": pod kindnet-6lh4q is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-6lh4q"
	E0805 11:50:06.359326       1 schedule_one.go:1095] "Error updating pod" err="pods \"kindnet-6lh4q\" not found" pod="kube-system/kindnet-6lh4q"
	
	
	==> kubelet <==
	Aug 05 11:50:35 ha-672593 kubelet[1363]: E0805 11:50:35.506042    1363 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-672593" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-672593' and this object
	Aug 05 11:50:35 ha-672593 kubelet[1363]: I0805 11:50:35.603017    1363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwq4p\" (UniqueName: \"kubernetes.io/projected/b4aad5e1-e3ed-450f-b0c6-fa690e21632b-kube-api-access-gwq4p\") pod \"busybox-fc5497c4f-xx72g\" (UID: \"b4aad5e1-e3ed-450f-b0c6-fa690e21632b\") " pod="default/busybox-fc5497c4f-xx72g"
	Aug 05 11:50:36 ha-672593 kubelet[1363]: E0805 11:50:36.819567    1363 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 11:50:36 ha-672593 kubelet[1363]: E0805 11:50:36.819646    1363 projected.go:200] Error preparing data for projected volume kube-api-access-gwq4p for pod default/busybox-fc5497c4f-xx72g: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 11:50:36 ha-672593 kubelet[1363]: E0805 11:50:36.820371    1363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b4aad5e1-e3ed-450f-b0c6-fa690e21632b-kube-api-access-gwq4p podName:b4aad5e1-e3ed-450f-b0c6-fa690e21632b nodeName:}" failed. No retries permitted until 2024-08-05 11:50:37.319740901 +0000 UTC m=+169.429622307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwq4p" (UniqueName: "kubernetes.io/projected/b4aad5e1-e3ed-450f-b0c6-fa690e21632b-kube-api-access-gwq4p") pod "busybox-fc5497c4f-xx72g" (UID: "b4aad5e1-e3ed-450f-b0c6-fa690e21632b") : failed to sync configmap cache: timed out waiting for the condition
	Aug 05 11:50:48 ha-672593 kubelet[1363]: E0805 11:50:48.028465    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:50:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:50:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:50:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:50:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:51:48 ha-672593 kubelet[1363]: E0805 11:51:48.029347    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:51:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:51:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:51:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:51:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:52:48 ha-672593 kubelet[1363]: E0805 11:52:48.034253    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:52:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:52:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:52:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:52:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:53:48 ha-672593 kubelet[1363]: E0805 11:53:48.030232    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:53:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:53:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:53:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:53:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-672593 -n ha-672593
helpers_test.go:261: (dbg) Run:  kubectl --context ha-672593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (51.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (3.194899235s)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:54:48.468590  407844 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:54:48.468833  407844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:48.468843  407844 out.go:304] Setting ErrFile to fd 2...
	I0805 11:54:48.468848  407844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:48.469045  407844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:54:48.469223  407844 out.go:298] Setting JSON to false
	I0805 11:54:48.469252  407844 mustload.go:65] Loading cluster: ha-672593
	I0805 11:54:48.469295  407844 notify.go:220] Checking for updates...
	I0805 11:54:48.469741  407844 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:54:48.469764  407844 status.go:255] checking status of ha-672593 ...
	I0805 11:54:48.470238  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:48.470300  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:48.487813  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0805 11:54:48.488237  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:48.488795  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:48.488815  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:48.489220  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:48.489466  407844 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:54:48.490937  407844 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:54:48.490953  407844 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:48.491240  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:48.491282  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:48.507137  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0805 11:54:48.507602  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:48.508211  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:48.508240  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:48.508560  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:48.508742  407844 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:54:48.511372  407844 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:48.511870  407844 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:48.511901  407844 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:48.512056  407844 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:48.512394  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:48.512433  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:48.526895  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I0805 11:54:48.527300  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:48.527763  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:48.527786  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:48.528140  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:48.528312  407844 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:54:48.528482  407844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:48.528502  407844 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:54:48.531110  407844 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:48.531508  407844 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:48.531537  407844 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:48.531653  407844 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:54:48.531848  407844 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:54:48.532013  407844 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:54:48.532159  407844 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:54:48.615377  407844 ssh_runner.go:195] Run: systemctl --version
	I0805 11:54:48.621239  407844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:48.644719  407844 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:54:48.644751  407844 api_server.go:166] Checking apiserver status ...
	I0805 11:54:48.644792  407844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:54:48.662151  407844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:54:48.673485  407844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:54:48.673560  407844 ssh_runner.go:195] Run: ls
	I0805 11:54:48.678332  407844 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:54:48.682730  407844 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:54:48.682756  407844 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:54:48.682768  407844 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:54:48.682792  407844 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:54:48.683183  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:48.683227  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:48.698176  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0805 11:54:48.698597  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:48.699054  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:48.699079  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:48.699446  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:48.699631  407844 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:54:48.701282  407844 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 11:54:48.701301  407844 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:54:48.701661  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:48.701702  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:48.716511  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0805 11:54:48.716893  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:48.717389  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:48.717404  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:48.717779  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:48.717958  407844 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:54:48.720658  407844 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:48.721051  407844 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:54:48.721091  407844 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:48.721196  407844 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:54:48.721521  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:48.721556  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:48.736221  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I0805 11:54:48.736620  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:48.737097  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:48.737119  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:48.737448  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:48.737649  407844 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:54:48.737839  407844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:48.737864  407844 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:54:48.740686  407844 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:48.741200  407844 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:54:48.741227  407844 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:48.741385  407844 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:54:48.741578  407844 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:54:48.741764  407844 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:54:48.741923  407844 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	W0805 11:54:51.268034  407844 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:54:51.268156  407844 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0805 11:54:51.268182  407844 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:54:51.268192  407844 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 11:54:51.268214  407844 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:54:51.268223  407844 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:54:51.268629  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:51.268681  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:51.284437  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0805 11:54:51.284843  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:51.285322  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:51.285347  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:51.285645  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:51.285867  407844 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:54:51.287454  407844 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:54:51.287475  407844 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:54:51.287876  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:51.287920  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:51.302813  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I0805 11:54:51.303247  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:51.303708  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:51.303734  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:51.304007  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:51.304218  407844 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:54:51.307213  407844 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:51.307680  407844 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:54:51.307713  407844 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:51.307881  407844 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:54:51.308294  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:51.308338  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:51.322528  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0805 11:54:51.322953  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:51.323406  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:51.323426  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:51.323761  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:51.323957  407844 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:54:51.324155  407844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:51.324177  407844 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:54:51.326941  407844 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:51.327354  407844 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:54:51.327387  407844 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:51.327540  407844 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:54:51.327702  407844 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:54:51.327879  407844 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:54:51.328021  407844 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:54:51.408971  407844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:51.430535  407844 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:54:51.430572  407844 api_server.go:166] Checking apiserver status ...
	I0805 11:54:51.430615  407844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:54:51.447770  407844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:54:51.457449  407844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:54:51.457524  407844 ssh_runner.go:195] Run: ls
	I0805 11:54:51.461729  407844 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:54:51.465870  407844 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:54:51.465894  407844 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:54:51.465905  407844 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:54:51.465930  407844 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:54:51.466259  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:51.466305  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:51.481457  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0805 11:54:51.481919  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:51.482376  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:51.482397  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:51.482785  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:51.483012  407844 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:54:51.484609  407844 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:54:51.484629  407844 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:54:51.484911  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:51.484946  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:51.499798  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0805 11:54:51.500185  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:51.500685  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:51.500708  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:51.501028  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:51.501243  407844 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:54:51.504143  407844 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:51.504595  407844 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:54:51.504627  407844 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:51.504798  407844 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:54:51.505084  407844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:51.505150  407844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:51.520206  407844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0805 11:54:51.520628  407844 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:51.521209  407844 main.go:141] libmachine: Using API Version  1
	I0805 11:54:51.521241  407844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:51.521595  407844 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:51.521797  407844 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:54:51.521987  407844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:51.522008  407844 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:54:51.524480  407844 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:51.524922  407844 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:54:51.524962  407844 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:51.525036  407844 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:54:51.525234  407844 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:54:51.525404  407844 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:54:51.525527  407844 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:54:51.603540  407844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:51.619193  407844 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (4.82858047s)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:54:52.972377  407945 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:54:52.972475  407945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:52.972480  407945 out.go:304] Setting ErrFile to fd 2...
	I0805 11:54:52.972484  407945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:52.972668  407945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:54:52.972831  407945 out.go:298] Setting JSON to false
	I0805 11:54:52.972860  407945 mustload.go:65] Loading cluster: ha-672593
	I0805 11:54:52.972911  407945 notify.go:220] Checking for updates...
	I0805 11:54:52.973284  407945 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:54:52.973302  407945 status.go:255] checking status of ha-672593 ...
	I0805 11:54:52.973740  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:52.973821  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:52.992012  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I0805 11:54:52.992535  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:52.993192  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:52.993221  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:52.993699  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:52.993930  407945 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:54:52.996189  407945 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:54:52.996234  407945 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:52.996666  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:52.996723  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:53.012614  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0805 11:54:53.013037  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:53.013526  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:53.013553  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:53.013908  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:53.014091  407945 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:54:53.016820  407945 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:53.017187  407945 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:53.017214  407945 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:53.017311  407945 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:53.017592  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:53.017642  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:53.031995  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36371
	I0805 11:54:53.032394  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:53.032870  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:53.032894  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:53.033202  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:53.033394  407945 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:54:53.033575  407945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:53.033615  407945 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:54:53.036741  407945 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:53.037176  407945 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:53.037207  407945 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:53.037329  407945 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:54:53.037514  407945 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:54:53.037697  407945 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:54:53.037827  407945 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:54:53.119286  407945 ssh_runner.go:195] Run: systemctl --version
	I0805 11:54:53.125940  407945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:53.141556  407945 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:54:53.141589  407945 api_server.go:166] Checking apiserver status ...
	I0805 11:54:53.141632  407945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:54:53.155559  407945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:54:53.165435  407945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:54:53.165511  407945 ssh_runner.go:195] Run: ls
	I0805 11:54:53.170136  407945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:54:53.176140  407945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:54:53.176175  407945 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:54:53.176186  407945 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:54:53.176208  407945 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:54:53.176498  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:53.176536  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:53.192140  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0805 11:54:53.192579  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:53.193071  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:53.193094  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:53.193518  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:53.193701  407945 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:54:53.195356  407945 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 11:54:53.195378  407945 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:54:53.195702  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:53.195761  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:53.210922  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0805 11:54:53.211460  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:53.212100  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:53.212127  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:53.212527  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:53.212793  407945 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:54:53.216297  407945 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:53.216865  407945 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:54:53.216906  407945 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:53.217064  407945 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:54:53.217484  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:53.217540  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:53.232136  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0805 11:54:53.232701  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:53.233269  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:53.233293  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:53.233630  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:53.233843  407945 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:54:53.234035  407945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:53.234062  407945 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:54:53.236916  407945 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:53.237379  407945 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:54:53.237410  407945 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:54:53.237586  407945 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:54:53.237740  407945 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:54:53.237873  407945 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:54:53.237988  407945 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	W0805 11:54:54.340054  407945 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:54:54.340106  407945 retry.go:31] will retry after 310.064988ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:54:57.412079  407945 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:54:57.412192  407945 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0805 11:54:57.412216  407945 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:54:57.412230  407945 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 11:54:57.412285  407945 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:54:57.412300  407945 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:54:57.412739  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:57.412803  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:57.427678  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0805 11:54:57.428221  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:57.428801  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:57.428827  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:57.429120  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:57.429367  407945 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:54:57.430945  407945 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:54:57.430961  407945 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:54:57.431267  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:57.431300  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:57.446023  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I0805 11:54:57.446454  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:57.446946  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:57.446968  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:57.447290  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:57.447489  407945 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:54:57.450145  407945 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:57.450661  407945 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:54:57.450684  407945 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:57.450831  407945 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:54:57.451149  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:57.451186  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:57.466615  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40727
	I0805 11:54:57.467176  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:57.467641  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:57.467664  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:57.468023  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:57.468198  407945 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:54:57.468431  407945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:57.468457  407945 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:54:57.471089  407945 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:57.471579  407945 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:54:57.471612  407945 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:54:57.471766  407945 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:54:57.471903  407945 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:54:57.472058  407945 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:54:57.472195  407945 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:54:57.551539  407945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:57.566291  407945 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:54:57.566330  407945 api_server.go:166] Checking apiserver status ...
	I0805 11:54:57.566382  407945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:54:57.579581  407945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:54:57.588890  407945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:54:57.588947  407945 ssh_runner.go:195] Run: ls
	I0805 11:54:57.592996  407945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:54:57.597206  407945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:54:57.597231  407945 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:54:57.597243  407945 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:54:57.597264  407945 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:54:57.597653  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:57.597698  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:57.613177  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0805 11:54:57.613639  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:57.614162  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:57.614188  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:57.614610  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:57.614837  407945 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:54:57.616430  407945 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:54:57.616452  407945 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:54:57.616866  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:57.616910  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:57.633903  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0805 11:54:57.634448  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:57.634913  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:57.634935  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:57.635252  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:57.635463  407945 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:54:57.638133  407945 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:57.638505  407945 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:54:57.638539  407945 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:57.638713  407945 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:54:57.639010  407945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:57.639047  407945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:57.654179  407945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0805 11:54:57.654555  407945 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:57.655099  407945 main.go:141] libmachine: Using API Version  1
	I0805 11:54:57.655131  407945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:57.655464  407945 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:57.655655  407945 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:54:57.655894  407945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:57.655921  407945 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:54:57.658742  407945 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:57.659201  407945 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:54:57.659229  407945 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:54:57.659373  407945 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:54:57.659529  407945 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:54:57.659692  407945 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:54:57.659818  407945 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:54:57.743304  407945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:54:57.757892  407945 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (4.282610069s)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:54:59.896662  408045 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:54:59.896909  408045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:59.896917  408045 out.go:304] Setting ErrFile to fd 2...
	I0805 11:54:59.896921  408045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:54:59.897141  408045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:54:59.897305  408045 out.go:298] Setting JSON to false
	I0805 11:54:59.897339  408045 mustload.go:65] Loading cluster: ha-672593
	I0805 11:54:59.897467  408045 notify.go:220] Checking for updates...
	I0805 11:54:59.897854  408045 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:54:59.897877  408045 status.go:255] checking status of ha-672593 ...
	I0805 11:54:59.898374  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:59.898430  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:59.915637  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0805 11:54:59.916072  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:59.916654  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:54:59.916677  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:59.917054  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:59.917278  408045 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:54:59.918842  408045 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:54:59.918866  408045 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:59.919282  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:59.919346  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:59.935487  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0805 11:54:59.935929  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:59.936400  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:54:59.936422  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:59.936731  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:59.936911  408045 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:54:59.940084  408045 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:59.940526  408045 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:59.940562  408045 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:59.940673  408045 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:54:59.940995  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:54:59.941052  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:54:59.957643  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0805 11:54:59.958159  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:54:59.958651  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:54:59.958675  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:54:59.958992  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:54:59.959219  408045 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:54:59.959408  408045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:54:59.959460  408045 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:54:59.962316  408045 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:59.962849  408045 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:54:59.962877  408045 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:54:59.963034  408045 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:54:59.963245  408045 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:54:59.963464  408045 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:54:59.963612  408045 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:55:00.048893  408045 ssh_runner.go:195] Run: systemctl --version
	I0805 11:55:00.055256  408045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:00.069951  408045 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:00.069979  408045 api_server.go:166] Checking apiserver status ...
	I0805 11:55:00.070012  408045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:00.084152  408045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:55:00.094501  408045 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:00.094563  408045 ssh_runner.go:195] Run: ls
	I0805 11:55:00.098873  408045 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:00.106607  408045 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:00.106630  408045 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:55:00.106641  408045 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:00.106663  408045 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:55:00.107002  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:00.107044  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:00.122092  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I0805 11:55:00.122541  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:00.123055  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:00.123080  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:00.123470  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:00.123677  408045 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:55:00.125334  408045 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 11:55:00.125357  408045 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:00.125780  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:00.125833  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:00.141947  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0805 11:55:00.142365  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:00.142822  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:00.142843  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:00.143211  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:00.143418  408045 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:55:00.146427  408045 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:00.146859  408045 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:00.146885  408045 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:00.147066  408045 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:00.147438  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:00.147477  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:00.162087  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0805 11:55:00.162524  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:00.163015  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:00.163040  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:00.163359  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:00.163521  408045 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:55:00.163717  408045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:00.163739  408045 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:55:00.166222  408045 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:00.166644  408045 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:00.166664  408045 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:00.166791  408045 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:55:00.166947  408045 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:55:00.167085  408045 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:55:00.167235  408045 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	W0805 11:55:00.484007  408045 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:00.484081  408045 retry.go:31] will retry after 242.589706ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:55:03.780019  408045 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:55:03.780112  408045 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0805 11:55:03.780127  408045 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:03.780136  408045 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 11:55:03.780170  408045 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:03.780177  408045 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:55:03.780495  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:03.780547  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:03.796209  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I0805 11:55:03.796691  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:03.797197  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:03.797220  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:03.797621  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:03.797832  408045 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:55:03.799646  408045 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:55:03.799665  408045 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:03.800025  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:03.800074  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:03.815400  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0805 11:55:03.815914  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:03.816463  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:03.816489  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:03.816862  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:03.817057  408045 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:55:03.819575  408045 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:03.820083  408045 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:03.820115  408045 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:03.820288  408045 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:03.820704  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:03.820747  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:03.838959  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44869
	I0805 11:55:03.839366  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:03.839861  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:03.839886  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:03.840212  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:03.840399  408045 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:55:03.840587  408045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:03.840608  408045 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:55:03.843504  408045 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:03.843868  408045 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:03.843894  408045 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:03.844035  408045 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:55:03.844205  408045 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:55:03.844522  408045 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:55:03.844680  408045 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:55:03.923596  408045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:03.938943  408045 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:03.938977  408045 api_server.go:166] Checking apiserver status ...
	I0805 11:55:03.939020  408045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:03.956461  408045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:55:03.969505  408045 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:03.969568  408045 ssh_runner.go:195] Run: ls
	I0805 11:55:03.974480  408045 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:03.979110  408045 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:03.979135  408045 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:55:03.979144  408045 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:03.979159  408045 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:55:03.979457  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:03.979491  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:03.994918  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0805 11:55:03.995392  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:03.995997  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:03.996026  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:03.996382  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:03.996637  408045 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:55:03.998434  408045 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:55:03.998450  408045 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:03.998736  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:03.998791  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:04.013520  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0805 11:55:04.013972  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:04.014492  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:04.014514  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:04.014828  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:04.015073  408045 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:55:04.017783  408045 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:04.018255  408045 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:04.018284  408045 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:04.018431  408045 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:04.018751  408045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:04.018792  408045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:04.034249  408045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0805 11:55:04.034739  408045 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:04.035268  408045 main.go:141] libmachine: Using API Version  1
	I0805 11:55:04.035292  408045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:04.035613  408045 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:04.035823  408045 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:55:04.036016  408045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:04.036045  408045 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:55:04.039252  408045 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:04.039778  408045 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:04.039815  408045 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:04.039960  408045 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:55:04.040148  408045 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:55:04.040319  408045 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:55:04.040502  408045 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:55:04.119138  408045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:04.134144  408045 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (4.821229861s)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:55:05.498586  408145 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:55:05.498717  408145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:05.498727  408145 out.go:304] Setting ErrFile to fd 2...
	I0805 11:55:05.498731  408145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:05.498903  408145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:55:05.499062  408145 out.go:298] Setting JSON to false
	I0805 11:55:05.499090  408145 mustload.go:65] Loading cluster: ha-672593
	I0805 11:55:05.499210  408145 notify.go:220] Checking for updates...
	I0805 11:55:05.499490  408145 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:55:05.499509  408145 status.go:255] checking status of ha-672593 ...
	I0805 11:55:05.499935  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:05.500020  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:05.515680  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0805 11:55:05.516109  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:05.516687  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:05.516715  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:05.517084  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:05.517313  408145 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:55:05.518862  408145 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:55:05.518882  408145 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:05.519162  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:05.519202  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:05.534065  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0805 11:55:05.534498  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:05.534942  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:05.534966  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:05.535279  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:05.535497  408145 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:55:05.538426  408145 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:05.538851  408145 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:05.538878  408145 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:05.538997  408145 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:05.539274  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:05.539304  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:05.554227  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0805 11:55:05.554650  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:05.555097  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:05.555115  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:05.555462  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:05.555677  408145 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:55:05.555900  408145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:05.555939  408145 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:55:05.558992  408145 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:05.559480  408145 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:05.559507  408145 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:05.559614  408145 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:55:05.559779  408145 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:55:05.559961  408145 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:55:05.560125  408145 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:55:05.644772  408145 ssh_runner.go:195] Run: systemctl --version
	I0805 11:55:05.651538  408145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:05.669255  408145 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:05.669284  408145 api_server.go:166] Checking apiserver status ...
	I0805 11:55:05.669316  408145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:05.686062  408145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:55:05.696671  408145 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:05.696730  408145 ssh_runner.go:195] Run: ls
	I0805 11:55:05.701695  408145 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:05.706252  408145 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:05.706277  408145 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:55:05.706289  408145 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:05.706311  408145 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:55:05.706624  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:05.706668  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:05.722208  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I0805 11:55:05.722689  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:05.723192  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:05.723224  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:05.723624  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:05.723858  408145 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:55:05.725598  408145 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 11:55:05.725623  408145 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:05.726051  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:05.726123  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:05.742635  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0805 11:55:05.743089  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:05.743622  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:05.743643  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:05.744010  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:05.744204  408145 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:55:05.747168  408145 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:05.747605  408145 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:05.747627  408145 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:05.747816  408145 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:05.748238  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:05.748285  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:05.764555  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0805 11:55:05.765050  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:05.765559  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:05.765582  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:05.765918  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:05.766116  408145 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:55:05.766286  408145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:05.766308  408145 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:55:05.768971  408145 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:05.769352  408145 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:05.769379  408145 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:05.769490  408145 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:55:05.769660  408145 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:55:05.769803  408145 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:55:05.769922  408145 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	W0805 11:55:06.852047  408145 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:06.852127  408145 retry.go:31] will retry after 235.93273ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:55:09.924036  408145 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:55:09.924159  408145 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0805 11:55:09.924190  408145 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:09.924202  408145 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 11:55:09.924231  408145 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:09.924256  408145 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:55:09.924583  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:09.924638  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:09.940000  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I0805 11:55:09.940434  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:09.940899  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:09.940921  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:09.941283  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:09.941520  408145 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:55:09.943064  408145 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:55:09.943080  408145 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:09.943372  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:09.943415  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:09.957884  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0805 11:55:09.958330  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:09.958832  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:09.958855  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:09.959175  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:09.959353  408145 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:55:09.962039  408145 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:09.962539  408145 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:09.962569  408145 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:09.962753  408145 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:09.963098  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:09.963138  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:09.978427  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44329
	I0805 11:55:09.978805  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:09.979246  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:09.979266  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:09.979598  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:09.979845  408145 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:55:09.980033  408145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:09.980055  408145 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:55:09.982334  408145 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:09.982728  408145 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:09.982764  408145 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:09.982937  408145 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:55:09.983140  408145 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:55:09.983297  408145 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:55:09.983453  408145 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:55:10.063383  408145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:10.079299  408145 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:10.079336  408145 api_server.go:166] Checking apiserver status ...
	I0805 11:55:10.079381  408145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:10.093438  408145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:55:10.103656  408145 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:10.103724  408145 ssh_runner.go:195] Run: ls
	I0805 11:55:10.108355  408145 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:10.113514  408145 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:10.113540  408145 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:55:10.113549  408145 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:10.113566  408145 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:55:10.113914  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:10.113960  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:10.130518  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0805 11:55:10.130958  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:10.131520  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:10.131546  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:10.131930  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:10.132140  408145 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:55:10.133631  408145 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:55:10.133650  408145 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:10.133937  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:10.133989  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:10.148730  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0805 11:55:10.149172  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:10.149700  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:10.149721  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:10.150053  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:10.150278  408145 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:55:10.153038  408145 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:10.153451  408145 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:10.153485  408145 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:10.153606  408145 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:10.154018  408145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:10.154065  408145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:10.169579  408145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0805 11:55:10.169973  408145 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:10.170470  408145 main.go:141] libmachine: Using API Version  1
	I0805 11:55:10.170498  408145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:10.170798  408145 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:10.170966  408145 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:55:10.171141  408145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:10.171160  408145 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:55:10.173726  408145 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:10.174084  408145 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:10.174111  408145 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:10.174231  408145 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:55:10.174425  408145 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:55:10.174554  408145 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:55:10.174687  408145 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:55:10.259044  408145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:10.272684  408145 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
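The stderr trace above walks through the per-node status probe: open an SSH session to the node, check the kubelet with systemctl is-active, then hit the control-plane endpoint https://192.168.39.254:8443/healthz and treat a 200 "ok" response as a running apiserver. Below is a minimal, self-contained Go sketch of that final healthz step, for illustration only; the endpoint URL and the skip-TLS-verify behaviour are assumptions drawn from the log, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes a Kubernetes apiserver /healthz endpoint and reports
// whether it answered 200 with an "ok" body. TLS verification is skipped
// because the test cluster uses self-signed certificates (an assumption
// made for this sketch).
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := checkHealthz("https://192.168.39.254:8443/healthz")
	fmt.Println(healthy, err)
}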
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (4.252937267s)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:55:12.532192  408261 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:55:12.532755  408261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:12.532772  408261 out.go:304] Setting ErrFile to fd 2...
	I0805 11:55:12.532778  408261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:12.533218  408261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:55:12.533657  408261 out.go:298] Setting JSON to false
	I0805 11:55:12.533816  408261 mustload.go:65] Loading cluster: ha-672593
	I0805 11:55:12.533902  408261 notify.go:220] Checking for updates...
	I0805 11:55:12.534247  408261 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:55:12.534278  408261 status.go:255] checking status of ha-672593 ...
	I0805 11:55:12.534659  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:12.534716  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:12.551510  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0805 11:55:12.552005  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:12.552670  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:12.552690  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:12.553124  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:12.553365  408261 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:55:12.555135  408261 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:55:12.555166  408261 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:12.555600  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:12.555661  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:12.570916  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35963
	I0805 11:55:12.571356  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:12.571900  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:12.571923  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:12.572379  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:12.572573  408261 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:55:12.575708  408261 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:12.576196  408261 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:12.576228  408261 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:12.576381  408261 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:12.576652  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:12.576698  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:12.591690  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I0805 11:55:12.592111  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:12.592571  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:12.592599  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:12.592894  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:12.593087  408261 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:55:12.593304  408261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:12.593337  408261 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:55:12.596389  408261 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:12.596854  408261 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:12.596880  408261 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:12.597029  408261 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:55:12.597210  408261 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:55:12.597360  408261 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:55:12.597496  408261 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:55:12.684063  408261 ssh_runner.go:195] Run: systemctl --version
	I0805 11:55:12.690862  408261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:12.706696  408261 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:12.706731  408261 api_server.go:166] Checking apiserver status ...
	I0805 11:55:12.706787  408261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:12.722805  408261 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:55:12.732815  408261 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:12.732865  408261 ssh_runner.go:195] Run: ls
	I0805 11:55:12.737837  408261 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:12.742420  408261 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:12.742446  408261 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:55:12.742460  408261 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:12.742486  408261 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:55:12.742914  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:12.742986  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:12.759343  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0805 11:55:12.759817  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:12.760340  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:12.760364  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:12.760723  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:12.760951  408261 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:55:12.762706  408261 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 11:55:12.762726  408261 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:12.763027  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:12.763061  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:12.779730  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0805 11:55:12.780203  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:12.780769  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:12.780794  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:12.781213  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:12.781372  408261 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:55:12.784289  408261 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:12.784724  408261 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:12.784743  408261 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:12.784910  408261 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:12.785298  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:12.785344  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:12.800323  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I0805 11:55:12.800755  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:12.801178  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:12.801200  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:12.801462  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:12.801672  408261 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:55:12.801826  408261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:12.801847  408261 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:55:12.804640  408261 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:12.805029  408261 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:12.805064  408261 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:12.805168  408261 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:55:12.805366  408261 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:55:12.805521  408261 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:55:12.805655  408261 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	W0805 11:55:12.995979  408261 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:12.996049  408261 retry.go:31] will retry after 317.790853ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:55:16.388033  408261 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:55:16.388188  408261 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0805 11:55:16.388217  408261 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:16.388228  408261 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 11:55:16.388259  408261 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:16.388288  408261 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:55:16.388651  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:16.388713  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:16.403704  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0805 11:55:16.404202  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:16.404695  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:16.404729  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:16.405084  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:16.405285  408261 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:55:16.406995  408261 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:55:16.407017  408261 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:16.407344  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:16.407414  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:16.421816  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33751
	I0805 11:55:16.422168  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:16.422577  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:16.422597  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:16.422890  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:16.423082  408261 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:55:16.425730  408261 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:16.426164  408261 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:16.426191  408261 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:16.426356  408261 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:16.426758  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:16.426811  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:16.441625  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40679
	I0805 11:55:16.442002  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:16.442420  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:16.442440  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:16.442754  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:16.442945  408261 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:55:16.443256  408261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:16.443280  408261 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:55:16.445746  408261 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:16.446121  408261 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:16.446142  408261 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:16.446334  408261 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:55:16.446518  408261 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:55:16.446674  408261 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:55:16.446812  408261 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:55:16.527715  408261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:16.543811  408261 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:16.543849  408261 api_server.go:166] Checking apiserver status ...
	I0805 11:55:16.543893  408261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:16.563180  408261 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:55:16.573911  408261 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:16.573977  408261 ssh_runner.go:195] Run: ls
	I0805 11:55:16.581338  408261 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:16.585520  408261 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:16.585547  408261 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:55:16.585559  408261 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:16.585581  408261 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:55:16.585878  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:16.585913  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:16.601559  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0805 11:55:16.602079  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:16.602547  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:16.602569  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:16.602985  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:16.603198  408261 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:55:16.604781  408261 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:55:16.604806  408261 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:16.605195  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:16.605237  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:16.622469  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45757
	I0805 11:55:16.622935  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:16.623452  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:16.623481  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:16.623825  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:16.624043  408261 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:55:16.626767  408261 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:16.627196  408261 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:16.627221  408261 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:16.627369  408261 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:16.627789  408261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:16.627831  408261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:16.643448  408261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0805 11:55:16.643899  408261 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:16.644338  408261 main.go:141] libmachine: Using API Version  1
	I0805 11:55:16.644366  408261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:16.644705  408261 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:16.644852  408261 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:55:16.645253  408261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:16.645283  408261 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:55:16.647962  408261 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:16.648380  408261 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:16.648411  408261 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:16.648730  408261 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:55:16.648915  408261 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:55:16.649067  408261 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:55:16.649212  408261 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:55:16.726566  408261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:16.740034  408261 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
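The run above shows the failure mode behind the non-zero exits: the probe first runs df -h /var over SSH to read storage usage, but every dial to ha-672593-m02 at 192.168.39.68:22 returns "connect: no route to host", so once the retries are exhausted the node is reported as host: Error with kubelet and apiserver Nonexistent. The Go sketch below illustrates that dial-and-retry pattern; probeSSH, the attempt count, and the backoff are hypothetical choices for illustration, not the project's code.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH tries to open a TCP connection to the node's SSH port a few times.
// If every attempt fails (e.g. "no route to host"), the caller can mark the
// node as Error, mirroring the behaviour visible in the log above.
func probeSSH(addr string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	if err := probeSSH("192.168.39.68:22", 3, 300*time.Millisecond); err != nil {
		fmt.Println("host status = Error:", err)
	}
}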
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (3.737444716s)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:55:23.542284  408377 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:55:23.542554  408377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:23.542563  408377 out.go:304] Setting ErrFile to fd 2...
	I0805 11:55:23.542567  408377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:23.542737  408377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:55:23.542939  408377 out.go:298] Setting JSON to false
	I0805 11:55:23.542967  408377 mustload.go:65] Loading cluster: ha-672593
	I0805 11:55:23.543057  408377 notify.go:220] Checking for updates...
	I0805 11:55:23.543340  408377 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:55:23.543361  408377 status.go:255] checking status of ha-672593 ...
	I0805 11:55:23.543789  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:23.543873  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:23.559856  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0805 11:55:23.560335  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:23.561005  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:23.561060  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:23.561578  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:23.561821  408377 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:55:23.563494  408377 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:55:23.563529  408377 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:23.563851  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:23.563896  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:23.579016  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0805 11:55:23.579516  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:23.580009  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:23.580030  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:23.580362  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:23.580520  408377 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:55:23.583244  408377 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:23.583659  408377 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:23.583685  408377 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:23.583849  408377 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:23.584136  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:23.584171  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:23.598927  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0805 11:55:23.599358  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:23.599845  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:23.599865  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:23.600165  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:23.600360  408377 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:55:23.600557  408377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:23.600586  408377 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:55:23.603846  408377 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:23.604276  408377 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:23.604315  408377 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:23.604470  408377 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:55:23.604762  408377 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:55:23.604923  408377 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:55:23.605058  408377 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:55:23.691537  408377 ssh_runner.go:195] Run: systemctl --version
	I0805 11:55:23.699498  408377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:23.717067  408377 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:23.717099  408377 api_server.go:166] Checking apiserver status ...
	I0805 11:55:23.717144  408377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:23.732186  408377 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:55:23.741872  408377 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:23.741917  408377 ssh_runner.go:195] Run: ls
	I0805 11:55:23.746742  408377 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:23.753689  408377 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:23.753716  408377 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:55:23.753727  408377 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:23.753749  408377 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:55:23.754031  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:23.754069  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:23.768886  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0805 11:55:23.769369  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:23.769863  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:23.769885  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:23.770212  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:23.770421  408377 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:55:23.771996  408377 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 11:55:23.772018  408377 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:23.772455  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:23.772493  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:23.786866  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I0805 11:55:23.787272  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:23.787853  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:23.787880  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:23.788196  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:23.788398  408377 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:55:23.791005  408377 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:23.791440  408377 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:23.791466  408377 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:23.791578  408377 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 11:55:23.791993  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:23.792035  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:23.807131  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I0805 11:55:23.807593  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:23.808089  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:23.808112  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:23.808469  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:23.808690  408377 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:55:23.808892  408377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:23.808913  408377 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:55:23.811649  408377 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:23.812119  408377 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:55:23.812146  408377 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:55:23.812277  408377 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:55:23.812458  408377 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:55:23.812655  408377 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:55:23.812793  408377 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	W0805 11:55:26.884025  408377 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0805 11:55:26.884182  408377 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0805 11:55:26.884209  408377 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:26.884218  408377 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 11:55:26.884238  408377 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0805 11:55:26.884246  408377 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:55:26.884602  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:26.884657  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:26.899989  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0805 11:55:26.900508  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:26.900975  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:26.901001  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:26.901430  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:26.901689  408377 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:55:26.903549  408377 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:55:26.903569  408377 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:26.903906  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:26.903947  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:26.919507  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0805 11:55:26.919922  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:26.920388  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:26.920412  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:26.920765  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:26.920957  408377 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:55:26.923849  408377 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:26.924245  408377 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:26.924267  408377 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:26.924459  408377 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:26.924756  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:26.924791  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:26.939157  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0805 11:55:26.939499  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:26.939926  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:26.939950  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:26.940255  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:26.940434  408377 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:55:26.940628  408377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:26.940658  408377 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:55:26.943080  408377 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:26.943549  408377 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:26.943565  408377 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:26.943761  408377 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:55:26.943918  408377 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:55:26.944087  408377 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:55:26.944268  408377 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:55:27.028511  408377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:27.044331  408377 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:27.044356  408377 api_server.go:166] Checking apiserver status ...
	I0805 11:55:27.044396  408377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:27.058197  408377 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:55:27.067703  408377 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:27.067759  408377 ssh_runner.go:195] Run: ls
	I0805 11:55:27.072597  408377 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:27.079228  408377 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:27.079256  408377 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:55:27.079265  408377 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:27.079281  408377 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:55:27.079677  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:27.079723  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:27.094622  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0805 11:55:27.095119  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:27.095659  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:27.095677  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:27.096001  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:27.096202  408377 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:55:27.097654  408377 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:55:27.097676  408377 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:27.097982  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:27.098018  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:27.114380  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0805 11:55:27.114828  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:27.115314  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:27.115339  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:27.115687  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:27.115918  408377 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:55:27.118685  408377 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:27.119137  408377 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:27.119178  408377 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:27.119507  408377 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:27.119984  408377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:27.120033  408377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:27.136063  408377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0805 11:55:27.136519  408377 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:27.137030  408377 main.go:141] libmachine: Using API Version  1
	I0805 11:55:27.137055  408377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:27.137408  408377 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:27.137611  408377 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:55:27.137816  408377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:27.137841  408377 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:55:27.140688  408377 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:27.141093  408377 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:27.141130  408377 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:27.141236  408377 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:55:27.141428  408377 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:55:27.141594  408377 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:55:27.141736  408377 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:55:27.219585  408377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:27.234923  408377 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0805 11:55:27.753144  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
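Every run also logs the warning "unable to find freezer cgroup" after grepping /proc/<pid>/cgroup for a freezer line. That is expected rather than a failure: when the host uses the unified cgroup v2 hierarchy (the likely case here), the file contains only a single "0::" entry, so the egrep exits 1 and the status check falls back to the healthz probe. The Go sketch below reproduces the same check; hasFreezerCgroup is a hypothetical helper shown only to illustrate why the grep comes up empty.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasFreezerCgroup reports whether /proc/<pid>/cgroup lists a cgroup v1
// freezer controller. On a cgroup v2 (unified) host the file holds a single
// "0::/..." line, so this returns false, matching the warning in the log.
func hasFreezerCgroup(pid int) (bool, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		parts := strings.SplitN(s.Text(), ":", 3)
		if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := hasFreezerCgroup(os.Getpid())
	fmt.Println(ok, err)
}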
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 7 (638.398645ms)

                                                
                                                
-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-672593-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:55:37.326600  408513 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:55:37.326751  408513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:37.326764  408513 out.go:304] Setting ErrFile to fd 2...
	I0805 11:55:37.326771  408513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:37.327045  408513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:55:37.327283  408513 out.go:298] Setting JSON to false
	I0805 11:55:37.327332  408513 mustload.go:65] Loading cluster: ha-672593
	I0805 11:55:37.327881  408513 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:55:37.327904  408513 status.go:255] checking status of ha-672593 ...
	I0805 11:55:37.328457  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.328512  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.328600  408513 notify.go:220] Checking for updates...
	I0805 11:55:37.352694  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0805 11:55:37.353136  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.353827  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.353851  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.354185  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.354379  408513 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:55:37.355915  408513 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 11:55:37.355940  408513 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:37.356201  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.356235  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.371646  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0805 11:55:37.372054  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.372619  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.372672  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.372987  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.373197  408513 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:55:37.376260  408513 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:37.376672  408513 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:37.376690  408513 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:37.376861  408513 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:55:37.377165  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.377200  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.391933  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35673
	I0805 11:55:37.392362  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.392782  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.392805  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.393134  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.393306  408513 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:55:37.393479  408513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:37.393516  408513 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:55:37.396244  408513 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:37.396695  408513 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:55:37.396735  408513 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:55:37.396899  408513 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:55:37.397071  408513 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:55:37.397243  408513 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:55:37.397388  408513 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:55:37.487491  408513 ssh_runner.go:195] Run: systemctl --version
	I0805 11:55:37.493557  408513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:37.509937  408513 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:37.509966  408513 api_server.go:166] Checking apiserver status ...
	I0805 11:55:37.510008  408513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:37.524526  408513 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0805 11:55:37.533490  408513 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:37.533537  408513 ssh_runner.go:195] Run: ls
	I0805 11:55:37.537842  408513 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:37.543768  408513 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:37.543796  408513 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 11:55:37.543810  408513 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:37.543836  408513 status.go:255] checking status of ha-672593-m02 ...
	I0805 11:55:37.544189  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.544235  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.559550  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I0805 11:55:37.560029  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.560541  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.560561  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.560885  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.561080  408513 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:55:37.562664  408513 status.go:330] ha-672593-m02 host status = "Stopped" (err=<nil>)
	I0805 11:55:37.562678  408513 status.go:343] host is not running, skipping remaining checks
	I0805 11:55:37.562686  408513 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:37.562706  408513 status.go:255] checking status of ha-672593-m03 ...
	I0805 11:55:37.563009  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.563042  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.577415  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39779
	I0805 11:55:37.577789  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.578284  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.578313  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.578605  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.578802  408513 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:55:37.580447  408513 status.go:330] ha-672593-m03 host status = "Running" (err=<nil>)
	I0805 11:55:37.580465  408513 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:37.580786  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.580856  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.595012  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I0805 11:55:37.595404  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.595861  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.595882  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.596223  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.596448  408513 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:55:37.598872  408513 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:37.599381  408513 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:37.599422  408513 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:37.599581  408513 host.go:66] Checking if "ha-672593-m03" exists ...
	I0805 11:55:37.599968  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.600008  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.614637  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0805 11:55:37.615077  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.615647  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.615673  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.615982  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.616182  408513 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:55:37.616409  408513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:37.616433  408513 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:55:37.618905  408513 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:37.619297  408513 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:37.619322  408513 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:37.619486  408513 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:55:37.619652  408513 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:55:37.619817  408513 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:55:37.619975  408513 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:55:37.704504  408513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:37.722065  408513 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 11:55:37.722105  408513 api_server.go:166] Checking apiserver status ...
	I0805 11:55:37.722139  408513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:55:37.739343  408513 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0805 11:55:37.749681  408513 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:55:37.749744  408513 ssh_runner.go:195] Run: ls
	I0805 11:55:37.753959  408513 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 11:55:37.758321  408513 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 11:55:37.758347  408513 status.go:422] ha-672593-m03 apiserver status = Running (err=<nil>)
	I0805 11:55:37.758357  408513 status.go:257] ha-672593-m03 status: &{Name:ha-672593-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 11:55:37.758373  408513 status.go:255] checking status of ha-672593-m04 ...
	I0805 11:55:37.758676  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.758709  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.774349  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0805 11:55:37.774769  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.775264  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.775286  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.775620  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.775835  408513 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:55:37.777388  408513 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 11:55:37.777406  408513 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:37.777777  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.777822  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.793450  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36105
	I0805 11:55:37.793920  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.794376  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.794395  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.794672  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.794867  408513 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 11:55:37.797900  408513 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:37.798597  408513 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:37.798625  408513 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:37.798798  408513 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 11:55:37.799080  408513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:37.799111  408513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:37.815277  408513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37779
	I0805 11:55:37.815725  408513 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:37.816234  408513 main.go:141] libmachine: Using API Version  1
	I0805 11:55:37.816258  408513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:37.816555  408513 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:37.816743  408513 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:55:37.816937  408513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:37.816955  408513 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:55:37.819448  408513 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:37.819868  408513 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:37.819895  408513 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:37.820057  408513 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:55:37.820223  408513 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:55:37.820343  408513 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:55:37.820442  408513 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:55:37.899695  408513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:55:37.917826  408513 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr" : exit status 7
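For context on the check that failed: the status logs above show minikube verifying each control-plane node by confirming the kubelet service is active over SSH and then probing the shared apiserver endpoint at https://192.168.39.254:8443/healthz, expecting HTTP 200 with body "ok"; exit status 7 is consistent with ha-672593-m02 still being reported as Stopped. Below is a minimal standalone sketch of that healthz probe, not part of the test output; the VIP and port are taken from this run's logs, and skipping TLS verification is an assumption made here only because this example does not load the cluster's CA certificate.

// Illustrative sketch (not part of the captured test output): probe the
// apiserver /healthz endpoint the way the status logs above do.
// The address 192.168.39.254:8443 comes from this run's logs; skipping TLS
// verification is an assumption for this standalone example.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log lines above expect HTTP 200 with body "ok".
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
}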
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-672593 -n ha-672593
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-672593 logs -n 25: (1.366658817s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593:/home/docker/cp-test_ha-672593-m03_ha-672593.txt                       |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593 sudo cat                                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593.txt                                 |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m02:/home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m04 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp testdata/cp-test.txt                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593:/home/docker/cp-test_ha-672593-m04_ha-672593.txt                       |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593 sudo cat                                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593.txt                                 |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m02:/home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03:/home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m03 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-672593 node stop m02 -v=7                                                     | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-672593 node start m02 -v=7                                                    | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:47:01
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:47:01.406932  402885 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:47:01.407221  402885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:47:01.407231  402885 out.go:304] Setting ErrFile to fd 2...
	I0805 11:47:01.407235  402885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:47:01.407430  402885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:47:01.408026  402885 out.go:298] Setting JSON to false
	I0805 11:47:01.409097  402885 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5368,"bootTime":1722853053,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:47:01.409167  402885 start.go:139] virtualization: kvm guest
	I0805 11:47:01.411485  402885 out.go:177] * [ha-672593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:47:01.412749  402885 notify.go:220] Checking for updates...
	I0805 11:47:01.412776  402885 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:47:01.413914  402885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:47:01.415104  402885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:47:01.416329  402885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:47:01.417431  402885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:47:01.418611  402885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:47:01.419828  402885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:47:01.454526  402885 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 11:47:01.455720  402885 start.go:297] selected driver: kvm2
	I0805 11:47:01.455736  402885 start.go:901] validating driver "kvm2" against <nil>
	I0805 11:47:01.455768  402885 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:47:01.456730  402885 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:47:01.456816  402885 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:47:01.472514  402885 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:47:01.472573  402885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:47:01.472803  402885 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:47:01.472868  402885 cni.go:84] Creating CNI manager for ""
	I0805 11:47:01.472880  402885 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 11:47:01.472885  402885 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 11:47:01.472947  402885 start.go:340] cluster config:
	{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:47:01.473039  402885 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:47:01.474952  402885 out.go:177] * Starting "ha-672593" primary control-plane node in "ha-672593" cluster
	I0805 11:47:01.476115  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:47:01.476152  402885 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 11:47:01.476171  402885 cache.go:56] Caching tarball of preloaded images
	I0805 11:47:01.476256  402885 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:47:01.476266  402885 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:47:01.476580  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:47:01.476599  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json: {Name:mk12aeb8990dfd2e3b7b889000f511c048d38e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:01.476722  402885 start.go:360] acquireMachinesLock for ha-672593: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:47:01.476748  402885 start.go:364] duration metric: took 15.125µs to acquireMachinesLock for "ha-672593"
	I0805 11:47:01.476768  402885 start.go:93] Provisioning new machine with config: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:47:01.476842  402885 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 11:47:01.478568  402885 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 11:47:01.478706  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:47:01.478754  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:47:01.493830  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0805 11:47:01.494257  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:47:01.494855  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:47:01.494883  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:47:01.495156  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:47:01.495344  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:01.495540  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:01.495679  402885 start.go:159] libmachine.API.Create for "ha-672593" (driver="kvm2")
	I0805 11:47:01.495706  402885 client.go:168] LocalClient.Create starting
	I0805 11:47:01.495769  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:47:01.495815  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:47:01.495835  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:47:01.495901  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:47:01.495926  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:47:01.495949  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:47:01.495981  402885 main.go:141] libmachine: Running pre-create checks...
	I0805 11:47:01.495992  402885 main.go:141] libmachine: (ha-672593) Calling .PreCreateCheck
	I0805 11:47:01.496378  402885 main.go:141] libmachine: (ha-672593) Calling .GetConfigRaw
	I0805 11:47:01.496812  402885 main.go:141] libmachine: Creating machine...
	I0805 11:47:01.496826  402885 main.go:141] libmachine: (ha-672593) Calling .Create
	I0805 11:47:01.496984  402885 main.go:141] libmachine: (ha-672593) Creating KVM machine...
	I0805 11:47:01.498181  402885 main.go:141] libmachine: (ha-672593) DBG | found existing default KVM network
	I0805 11:47:01.498912  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:01.498771  402908 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0805 11:47:01.498934  402885 main.go:141] libmachine: (ha-672593) DBG | created network xml: 
	I0805 11:47:01.498949  402885 main.go:141] libmachine: (ha-672593) DBG | <network>
	I0805 11:47:01.498976  402885 main.go:141] libmachine: (ha-672593) DBG |   <name>mk-ha-672593</name>
	I0805 11:47:01.498991  402885 main.go:141] libmachine: (ha-672593) DBG |   <dns enable='no'/>
	I0805 11:47:01.498998  402885 main.go:141] libmachine: (ha-672593) DBG |   
	I0805 11:47:01.499009  402885 main.go:141] libmachine: (ha-672593) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 11:47:01.499021  402885 main.go:141] libmachine: (ha-672593) DBG |     <dhcp>
	I0805 11:47:01.499091  402885 main.go:141] libmachine: (ha-672593) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 11:47:01.499113  402885 main.go:141] libmachine: (ha-672593) DBG |     </dhcp>
	I0805 11:47:01.499126  402885 main.go:141] libmachine: (ha-672593) DBG |   </ip>
	I0805 11:47:01.499134  402885 main.go:141] libmachine: (ha-672593) DBG |   
	I0805 11:47:01.499159  402885 main.go:141] libmachine: (ha-672593) DBG | </network>
	I0805 11:47:01.499180  402885 main.go:141] libmachine: (ha-672593) DBG | 
	I0805 11:47:01.504434  402885 main.go:141] libmachine: (ha-672593) DBG | trying to create private KVM network mk-ha-672593 192.168.39.0/24...
	I0805 11:47:01.570407  402885 main.go:141] libmachine: (ha-672593) DBG | private KVM network mk-ha-672593 192.168.39.0/24 created
	I0805 11:47:01.570444  402885 main.go:141] libmachine: (ha-672593) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593 ...
	I0805 11:47:01.570457  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:01.570404  402908 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:47:01.570492  402885 main.go:141] libmachine: (ha-672593) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:47:01.570653  402885 main.go:141] libmachine: (ha-672593) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:47:01.851874  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:01.851756  402908 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa...
	I0805 11:47:02.115451  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:02.115280  402908 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/ha-672593.rawdisk...
	I0805 11:47:02.115480  402885 main.go:141] libmachine: (ha-672593) DBG | Writing magic tar header
	I0805 11:47:02.115490  402885 main.go:141] libmachine: (ha-672593) DBG | Writing SSH key tar header
	I0805 11:47:02.115498  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:02.115426  402908 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593 ...
	I0805 11:47:02.115609  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593 (perms=drwx------)
	I0805 11:47:02.115624  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:47:02.115632  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593
	I0805 11:47:02.115641  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:47:02.115652  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:47:02.115662  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:47:02.115679  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:47:02.115705  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:47:02.115712  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:47:02.115717  402885 main.go:141] libmachine: (ha-672593) DBG | Checking permissions on dir: /home
	I0805 11:47:02.115724  402885 main.go:141] libmachine: (ha-672593) DBG | Skipping /home - not owner
	I0805 11:47:02.115732  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:47:02.115759  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:47:02.115774  402885 main.go:141] libmachine: (ha-672593) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:47:02.115844  402885 main.go:141] libmachine: (ha-672593) Creating domain...
	I0805 11:47:02.117016  402885 main.go:141] libmachine: (ha-672593) define libvirt domain using xml: 
	I0805 11:47:02.117034  402885 main.go:141] libmachine: (ha-672593) <domain type='kvm'>
	I0805 11:47:02.117041  402885 main.go:141] libmachine: (ha-672593)   <name>ha-672593</name>
	I0805 11:47:02.117064  402885 main.go:141] libmachine: (ha-672593)   <memory unit='MiB'>2200</memory>
	I0805 11:47:02.117070  402885 main.go:141] libmachine: (ha-672593)   <vcpu>2</vcpu>
	I0805 11:47:02.117074  402885 main.go:141] libmachine: (ha-672593)   <features>
	I0805 11:47:02.117079  402885 main.go:141] libmachine: (ha-672593)     <acpi/>
	I0805 11:47:02.117083  402885 main.go:141] libmachine: (ha-672593)     <apic/>
	I0805 11:47:02.117090  402885 main.go:141] libmachine: (ha-672593)     <pae/>
	I0805 11:47:02.117095  402885 main.go:141] libmachine: (ha-672593)     
	I0805 11:47:02.117101  402885 main.go:141] libmachine: (ha-672593)   </features>
	I0805 11:47:02.117106  402885 main.go:141] libmachine: (ha-672593)   <cpu mode='host-passthrough'>
	I0805 11:47:02.117111  402885 main.go:141] libmachine: (ha-672593)   
	I0805 11:47:02.117117  402885 main.go:141] libmachine: (ha-672593)   </cpu>
	I0805 11:47:02.117122  402885 main.go:141] libmachine: (ha-672593)   <os>
	I0805 11:47:02.117132  402885 main.go:141] libmachine: (ha-672593)     <type>hvm</type>
	I0805 11:47:02.117140  402885 main.go:141] libmachine: (ha-672593)     <boot dev='cdrom'/>
	I0805 11:47:02.117149  402885 main.go:141] libmachine: (ha-672593)     <boot dev='hd'/>
	I0805 11:47:02.117156  402885 main.go:141] libmachine: (ha-672593)     <bootmenu enable='no'/>
	I0805 11:47:02.117160  402885 main.go:141] libmachine: (ha-672593)   </os>
	I0805 11:47:02.117165  402885 main.go:141] libmachine: (ha-672593)   <devices>
	I0805 11:47:02.117171  402885 main.go:141] libmachine: (ha-672593)     <disk type='file' device='cdrom'>
	I0805 11:47:02.117178  402885 main.go:141] libmachine: (ha-672593)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/boot2docker.iso'/>
	I0805 11:47:02.117184  402885 main.go:141] libmachine: (ha-672593)       <target dev='hdc' bus='scsi'/>
	I0805 11:47:02.117191  402885 main.go:141] libmachine: (ha-672593)       <readonly/>
	I0805 11:47:02.117195  402885 main.go:141] libmachine: (ha-672593)     </disk>
	I0805 11:47:02.117200  402885 main.go:141] libmachine: (ha-672593)     <disk type='file' device='disk'>
	I0805 11:47:02.117209  402885 main.go:141] libmachine: (ha-672593)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:47:02.117222  402885 main.go:141] libmachine: (ha-672593)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/ha-672593.rawdisk'/>
	I0805 11:47:02.117230  402885 main.go:141] libmachine: (ha-672593)       <target dev='hda' bus='virtio'/>
	I0805 11:47:02.117234  402885 main.go:141] libmachine: (ha-672593)     </disk>
	I0805 11:47:02.117239  402885 main.go:141] libmachine: (ha-672593)     <interface type='network'>
	I0805 11:47:02.117245  402885 main.go:141] libmachine: (ha-672593)       <source network='mk-ha-672593'/>
	I0805 11:47:02.117250  402885 main.go:141] libmachine: (ha-672593)       <model type='virtio'/>
	I0805 11:47:02.117257  402885 main.go:141] libmachine: (ha-672593)     </interface>
	I0805 11:47:02.117271  402885 main.go:141] libmachine: (ha-672593)     <interface type='network'>
	I0805 11:47:02.117277  402885 main.go:141] libmachine: (ha-672593)       <source network='default'/>
	I0805 11:47:02.117282  402885 main.go:141] libmachine: (ha-672593)       <model type='virtio'/>
	I0805 11:47:02.117290  402885 main.go:141] libmachine: (ha-672593)     </interface>
	I0805 11:47:02.117321  402885 main.go:141] libmachine: (ha-672593)     <serial type='pty'>
	I0805 11:47:02.117344  402885 main.go:141] libmachine: (ha-672593)       <target port='0'/>
	I0805 11:47:02.117360  402885 main.go:141] libmachine: (ha-672593)     </serial>
	I0805 11:47:02.117375  402885 main.go:141] libmachine: (ha-672593)     <console type='pty'>
	I0805 11:47:02.117386  402885 main.go:141] libmachine: (ha-672593)       <target type='serial' port='0'/>
	I0805 11:47:02.117412  402885 main.go:141] libmachine: (ha-672593)     </console>
	I0805 11:47:02.117425  402885 main.go:141] libmachine: (ha-672593)     <rng model='virtio'>
	I0805 11:47:02.117445  402885 main.go:141] libmachine: (ha-672593)       <backend model='random'>/dev/random</backend>
	I0805 11:47:02.117461  402885 main.go:141] libmachine: (ha-672593)     </rng>
	I0805 11:47:02.117472  402885 main.go:141] libmachine: (ha-672593)     
	I0805 11:47:02.117482  402885 main.go:141] libmachine: (ha-672593)     
	I0805 11:47:02.117492  402885 main.go:141] libmachine: (ha-672593)   </devices>
	I0805 11:47:02.117507  402885 main.go:141] libmachine: (ha-672593) </domain>
	I0805 11:47:02.117522  402885 main.go:141] libmachine: (ha-672593) 
	I0805 11:47:02.121948  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:fd:a1:d4 in network default
	I0805 11:47:02.122593  402885 main.go:141] libmachine: (ha-672593) Ensuring networks are active...
	I0805 11:47:02.122620  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:02.123298  402885 main.go:141] libmachine: (ha-672593) Ensuring network default is active
	I0805 11:47:02.123585  402885 main.go:141] libmachine: (ha-672593) Ensuring network mk-ha-672593 is active
	I0805 11:47:02.124089  402885 main.go:141] libmachine: (ha-672593) Getting domain xml...
	I0805 11:47:02.124741  402885 main.go:141] libmachine: (ha-672593) Creating domain...
	I0805 11:47:03.319883  402885 main.go:141] libmachine: (ha-672593) Waiting to get IP...
	I0805 11:47:03.320698  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:03.321100  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:03.321129  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:03.321084  402908 retry.go:31] will retry after 197.742325ms: waiting for machine to come up
	I0805 11:47:03.520616  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:03.521078  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:03.521107  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:03.521037  402908 retry.go:31] will retry after 332.591294ms: waiting for machine to come up
	I0805 11:47:03.855863  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:03.856337  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:03.856368  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:03.856293  402908 retry.go:31] will retry after 293.806863ms: waiting for machine to come up
	I0805 11:47:04.151867  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:04.152292  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:04.152327  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:04.152261  402908 retry.go:31] will retry after 546.881134ms: waiting for machine to come up
	I0805 11:47:04.701205  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:04.701717  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:04.701747  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:04.701681  402908 retry.go:31] will retry after 690.115664ms: waiting for machine to come up
	I0805 11:47:05.393676  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:05.394222  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:05.394251  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:05.394152  402908 retry.go:31] will retry after 700.558042ms: waiting for machine to come up
	I0805 11:47:06.096140  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:06.096609  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:06.096657  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:06.096558  402908 retry.go:31] will retry after 1.106283154s: waiting for machine to come up
	I0805 11:47:07.204382  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:07.204777  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:07.204803  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:07.204737  402908 retry.go:31] will retry after 909.769737ms: waiting for machine to come up
	I0805 11:47:08.115835  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:08.116335  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:08.116368  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:08.116278  402908 retry.go:31] will retry after 1.197387753s: waiting for machine to come up
	I0805 11:47:09.315548  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:09.315864  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:09.315895  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:09.315809  402908 retry.go:31] will retry after 1.807716024s: waiting for machine to come up
	I0805 11:47:11.125701  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:11.126191  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:11.126215  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:11.126140  402908 retry.go:31] will retry after 1.998972255s: waiting for machine to come up
	I0805 11:47:13.127302  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:13.127827  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:13.127858  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:13.127717  402908 retry.go:31] will retry after 3.556381088s: waiting for machine to come up
	I0805 11:47:16.685699  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:16.686021  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:16.686045  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:16.685991  402908 retry.go:31] will retry after 4.271029073s: waiting for machine to come up
	I0805 11:47:20.962319  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:20.962715  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find current IP address of domain ha-672593 in network mk-ha-672593
	I0805 11:47:20.962744  402885 main.go:141] libmachine: (ha-672593) DBG | I0805 11:47:20.962659  402908 retry.go:31] will retry after 5.361767594s: waiting for machine to come up
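The sequence above is minikube's retry helper polling libvirt for the new domain's DHCP lease: each failed lookup schedules another attempt with a growing, jittered delay until the guest reports an IP. A minimal sketch of that polling pattern, assuming a hypothetical lookup callback (illustrative only, not minikube's actual retry.go API):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address or the deadline passes.
	// Delays grow roughly geometrically and are jittered, mirroring the
	// "will retry after ..." lines in the log above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 500 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			// Jitter so parallel machine creations do not poll in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay += delay / 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.102", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}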
	I0805 11:47:26.329675  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:26.330117  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has current primary IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:26.330139  402885 main.go:141] libmachine: (ha-672593) Found IP for machine: 192.168.39.102
	I0805 11:47:26.330152  402885 main.go:141] libmachine: (ha-672593) Reserving static IP address...
	I0805 11:47:26.330520  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find host DHCP lease matching {name: "ha-672593", mac: "52:54:00:9e:d5:95", ip: "192.168.39.102"} in network mk-ha-672593
	I0805 11:47:26.403576  402885 main.go:141] libmachine: (ha-672593) DBG | Getting to WaitForSSH function...
	I0805 11:47:26.403615  402885 main.go:141] libmachine: (ha-672593) Reserved static IP address: 192.168.39.102
	I0805 11:47:26.403627  402885 main.go:141] libmachine: (ha-672593) Waiting for SSH to be available...
	I0805 11:47:26.406287  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:26.406640  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593
	I0805 11:47:26.406714  402885 main.go:141] libmachine: (ha-672593) DBG | unable to find defined IP address of network mk-ha-672593 interface with MAC address 52:54:00:9e:d5:95
	I0805 11:47:26.406879  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH client type: external
	I0805 11:47:26.406903  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa (-rw-------)
	I0805 11:47:26.406928  402885 main.go:141] libmachine: (ha-672593) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:47:26.406941  402885 main.go:141] libmachine: (ha-672593) DBG | About to run SSH command:
	I0805 11:47:26.406956  402885 main.go:141] libmachine: (ha-672593) DBG | exit 0
	I0805 11:47:26.410442  402885 main.go:141] libmachine: (ha-672593) DBG | SSH cmd err, output: exit status 255: 
	I0805 11:47:26.410467  402885 main.go:141] libmachine: (ha-672593) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0805 11:47:26.410478  402885 main.go:141] libmachine: (ha-672593) DBG | command : exit 0
	I0805 11:47:26.410490  402885 main.go:141] libmachine: (ha-672593) DBG | err     : exit status 255
	I0805 11:47:26.410500  402885 main.go:141] libmachine: (ha-672593) DBG | output  : 
	I0805 11:47:29.412979  402885 main.go:141] libmachine: (ha-672593) DBG | Getting to WaitForSSH function...
	I0805 11:47:29.415178  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.415509  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.415538  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.415640  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH client type: external
	I0805 11:47:29.415669  402885 main.go:141] libmachine: (ha-672593) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa (-rw-------)
	I0805 11:47:29.415707  402885 main.go:141] libmachine: (ha-672593) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:47:29.415718  402885 main.go:141] libmachine: (ha-672593) DBG | About to run SSH command:
	I0805 11:47:29.415733  402885 main.go:141] libmachine: (ha-672593) DBG | exit 0
	I0805 11:47:29.543923  402885 main.go:141] libmachine: (ha-672593) DBG | SSH cmd err, output: <nil>: 
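libmachine's WaitForSSH probe simply runs `exit 0` on the guest through the external ssh binary with the non-interactive options shown above; the first attempt at 11:47:26 fails with exit status 255 (sshd inside the freshly booted guest is not accepting connections yet), and the probe is repeated until it succeeds at 11:47:29. A hedged sketch of that probe via os/exec, using the host and key path from the log and a subset of the options (the helper name is made up):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeSSH runs "exit 0" on the guest through the system ssh binary.
	// A nil error means the guest's sshd accepted the key and the machine
	// is reachable; the caller retries until that happens.
	func probeSSH(host, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + host,
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}

	func main() {
		err := probeSSH("192.168.39.102",
			"/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa")
		fmt.Println("ssh probe:", err)
	}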
	I0805 11:47:29.544178  402885 main.go:141] libmachine: (ha-672593) KVM machine creation complete!
	I0805 11:47:29.544569  402885 main.go:141] libmachine: (ha-672593) Calling .GetConfigRaw
	I0805 11:47:29.545201  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:29.545407  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:29.545583  402885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:47:29.545614  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:47:29.546800  402885 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:47:29.546813  402885 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:47:29.546820  402885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:47:29.546825  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.548715  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.549065  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.549092  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.549216  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.549406  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.549545  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.549692  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.549833  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.550100  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.550114  402885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:47:29.663179  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:47:29.663202  402885 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:47:29.663210  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.666721  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.667145  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.667166  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.667334  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.667524  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.667687  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.667847  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.668030  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.668198  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.668208  402885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:47:29.780645  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:47:29.780749  402885 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:47:29.780761  402885 main.go:141] libmachine: Provisioning with buildroot...
	I0805 11:47:29.780768  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:29.781042  402885 buildroot.go:166] provisioning hostname "ha-672593"
	I0805 11:47:29.781081  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:29.781288  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.783827  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.784232  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.784264  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.784384  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.784556  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.784705  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.784879  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.785072  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.785238  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.785261  402885 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593 && echo "ha-672593" | sudo tee /etc/hostname
	I0805 11:47:29.911387  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593
	
	I0805 11:47:29.911455  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:29.914263  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.914580  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:29.914605  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:29.914787  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:29.915038  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.915221  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:29.915385  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:29.915580  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:29.915795  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:29.915813  402885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:47:30.040854  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
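With SSH up, provisioning sets the hostname and rewrites /etc/hosts so 127.0.1.1 resolves to the new name, exactly as the shell snippet above shows. A small sketch that builds an equivalent command string for an SSH runner (the log actually runs the hostname change and the hosts edit as two separate commands, and the runner itself is assumed, not shown):

	package main

	import "fmt"

	// hostnameCommand returns a shell snippet equivalent to the one run over
	// SSH above: set the hostname, then map 127.0.1.1 to it in /etc/hosts if
	// no entry for the name exists yet.
	func hostnameCommand(name string) string {
		return fmt.Sprintf(
			`sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name)
	}

	func main() {
		fmt.Println(hostnameCommand("ha-672593"))
	}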
	I0805 11:47:30.040890  402885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:47:30.040934  402885 buildroot.go:174] setting up certificates
	I0805 11:47:30.040947  402885 provision.go:84] configureAuth start
	I0805 11:47:30.040962  402885 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:47:30.041282  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:30.043919  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.044419  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.044445  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.044586  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.046846  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.047093  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.047122  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.047264  402885 provision.go:143] copyHostCerts
	I0805 11:47:30.047300  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:47:30.047380  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:47:30.047395  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:47:30.047483  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:47:30.047616  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:47:30.047647  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:47:30.047659  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:47:30.047704  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:47:30.047815  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:47:30.047856  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:47:30.047869  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:47:30.047918  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:47:30.048067  402885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593 san=[127.0.0.1 192.168.39.102 ha-672593 localhost minikube]
	I0805 11:47:30.244143  402885 provision.go:177] copyRemoteCerts
	I0805 11:47:30.244208  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:47:30.244237  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.246801  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.247127  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.247153  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.247352  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.247580  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.247765  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.247930  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:30.333425  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:47:30.333489  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 11:47:30.356829  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:47:30.356901  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:47:30.380400  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:47:30.380461  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0805 11:47:30.403423  402885 provision.go:87] duration metric: took 362.461937ms to configureAuth
	I0805 11:47:30.403448  402885 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:47:30.403621  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:47:30.403706  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.405998  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.406288  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.406315  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.406439  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.406651  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.406830  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.407075  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.407275  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:30.407449  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:30.407466  402885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:47:30.677264  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:47:30.677295  402885 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:47:30.677303  402885 main.go:141] libmachine: (ha-672593) Calling .GetURL
	I0805 11:47:30.678830  402885 main.go:141] libmachine: (ha-672593) DBG | Using libvirt version 6000000
	I0805 11:47:30.681221  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.681528  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.681556  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.681800  402885 main.go:141] libmachine: Docker is up and running!
	I0805 11:47:30.681828  402885 main.go:141] libmachine: Reticulating splines...
	I0805 11:47:30.681838  402885 client.go:171] duration metric: took 29.18612156s to LocalClient.Create
	I0805 11:47:30.681864  402885 start.go:167] duration metric: took 29.186183459s to libmachine.API.Create "ha-672593"
	I0805 11:47:30.681876  402885 start.go:293] postStartSetup for "ha-672593" (driver="kvm2")
	I0805 11:47:30.681888  402885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:47:30.681906  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.682170  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:47:30.682194  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.684393  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.684666  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.684693  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.684853  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.685033  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.685184  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.685295  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:30.770326  402885 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:47:30.774907  402885 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:47:30.774936  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:47:30.775025  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:47:30.775100  402885 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:47:30.775107  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:47:30.775211  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:47:30.784903  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:47:30.812355  402885 start.go:296] duration metric: took 130.462768ms for postStartSetup
	I0805 11:47:30.812515  402885 main.go:141] libmachine: (ha-672593) Calling .GetConfigRaw
	I0805 11:47:30.813149  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:30.815890  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.816226  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.816254  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.816544  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:47:30.816756  402885 start.go:128] duration metric: took 29.339901951s to createHost
	I0805 11:47:30.816797  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.818999  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.819327  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.819366  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.819462  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.819647  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.819822  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.819935  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.820104  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:47:30.820329  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:47:30.820353  402885 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:47:30.932357  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722858450.914487602
	
	I0805 11:47:30.932384  402885 fix.go:216] guest clock: 1722858450.914487602
	I0805 11:47:30.932394  402885 fix.go:229] Guest: 2024-08-05 11:47:30.914487602 +0000 UTC Remote: 2024-08-05 11:47:30.816784327 +0000 UTC m=+29.447989374 (delta=97.703275ms)
	I0805 11:47:30.932421  402885 fix.go:200] guest clock delta is within tolerance: 97.703275ms
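fix.go compares the guest clock, read over SSH (the `%!s(MISSING)` noise above is Go's formatter tripping over the literal percent signs of what is effectively a `date +%s.%N` command), against the host's timestamp for the same moment; the 97.703275ms delta is inside the tolerance, so no clock adjustment is pushed to the guest. A tiny sketch of that check (the 2s tolerance below is an assumption for illustration):

	package main

	import (
		"fmt"
		"time"
	)

	// withinClockTolerance reports whether the guest/host clock delta is small
	// enough to skip resynchronising the guest, as in the fix.go lines above.
	func withinClockTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Date(2024, 8, 5, 11, 47, 30, 914487602, time.UTC)
		host := time.Date(2024, 8, 5, 11, 47, 30, 816784327, time.UTC)
		delta, ok := withinClockTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}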
	I0805 11:47:30.932428  402885 start.go:83] releasing machines lock for "ha-672593", held for 29.455670749s
	I0805 11:47:30.932453  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.932785  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:30.935097  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.935406  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.935434  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.935581  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.936066  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.936245  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:47:30.936332  402885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:47:30.936373  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.936471  402885 ssh_runner.go:195] Run: cat /version.json
	I0805 11:47:30.936504  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:47:30.938883  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939052  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939238  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.939260  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939387  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:30.939411  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:30.939423  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.939618  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:47:30.939633  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.939793  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:47:30.939800  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.939946  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:47:30.939933  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:30.940044  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:47:31.039737  402885 ssh_runner.go:195] Run: systemctl --version
	I0805 11:47:31.045475  402885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:47:31.197205  402885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:47:31.203650  402885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:47:31.203709  402885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:47:31.219157  402885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:47:31.219181  402885 start.go:495] detecting cgroup driver to use...
	I0805 11:47:31.219243  402885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:47:31.235548  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:47:31.249152  402885 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:47:31.249217  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:47:31.262673  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:47:31.276464  402885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:47:31.388840  402885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:47:31.545015  402885 docker.go:233] disabling docker service ...
	I0805 11:47:31.545107  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:47:31.559814  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:47:31.572831  402885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:47:31.698544  402885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:47:31.820235  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
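Because this profile uses crio, provisioning first makes sure no other runtime owns the CRI socket: cri-docker and docker are stopped, disabled and masked (containerd was already stopped above), with failures tolerated for units that are not installed. A rough sketch of that sequence, run locally here rather than through the SSH runner the log uses:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// disableOtherRuntimes mirrors the systemctl sequence in the log: stop,
	// disable and mask docker/cri-docker so they cannot grab the CRI socket
	// after a reboot. Errors are printed but not fatal; an absent unit simply fails.
	func disableOtherRuntimes() {
		cmds := [][]string{
			{"systemctl", "stop", "-f", "cri-docker.socket"},
			{"systemctl", "stop", "-f", "cri-docker.service"},
			{"systemctl", "disable", "cri-docker.socket"},
			{"systemctl", "mask", "cri-docker.service"},
			{"systemctl", "stop", "-f", "docker.socket"},
			{"systemctl", "stop", "-f", "docker.service"},
			{"systemctl", "disable", "docker.socket"},
			{"systemctl", "mask", "docker.service"},
		}
		for _, c := range cmds {
			if out, err := exec.Command("sudo", c...).CombinedOutput(); err != nil {
				fmt.Printf("%v: %v (%s)\n", c, err, out)
			}
		}
	}

	func main() { disableOtherRuntimes() }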
	I0805 11:47:31.834206  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:47:31.852152  402885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:47:31.852231  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.862655  402885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:47:31.862738  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.873423  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.883959  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.894368  402885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:47:31.906774  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.918325  402885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.936356  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:47:31.948286  402885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:47:31.959200  402885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:47:31.959239  402885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:47:31.974768  402885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:47:31.985693  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:47:32.126784  402885 ssh_runner.go:195] Run: sudo systemctl restart crio
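crio.go then edits /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is switched to cgroupfs, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls before crio is restarted. A condensed sketch of those sed edits (a subset of the full sequence above; it assumes a crio install to act on):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configureCrio applies in-place edits like those in the log above to the
	// crio drop-in config, then restarts the service so they take effect.
	func configureCrio() error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		steps := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
			`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
			`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' ` + conf,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, s := range steps {
			if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
				return fmt.Errorf("%q: %v (%s)", s, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCrio(); err != nil {
			fmt.Println(err)
		}
	}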
	I0805 11:47:32.260710  402885 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:47:32.260793  402885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:47:32.265705  402885 start.go:563] Will wait 60s for crictl version
	I0805 11:47:32.265775  402885 ssh_runner.go:195] Run: which crictl
	I0805 11:47:32.269618  402885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:47:32.310458  402885 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:47:32.310546  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:47:32.338923  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:47:32.367635  402885 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:47:32.368941  402885 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:47:32.371554  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:32.371976  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:47:32.372006  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:47:32.372218  402885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:47:32.376375  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:47:32.388848  402885 kubeadm.go:883] updating cluster {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 11:47:32.388986  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:47:32.389053  402885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:47:32.427488  402885 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 11:47:32.427574  402885 ssh_runner.go:195] Run: which lz4
	I0805 11:47:32.431340  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 11:47:32.431455  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 11:47:32.435364  402885 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 11:47:32.435390  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 11:47:33.806142  402885 crio.go:462] duration metric: took 1.374734579s to copy over tarball
	I0805 11:47:33.806232  402885 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 11:47:35.968986  402885 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.162714569s)
	I0805 11:47:35.969032  402885 crio.go:469] duration metric: took 2.162856294s to extract the tarball
	I0805 11:47:35.969045  402885 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 11:47:36.007014  402885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:47:36.054239  402885 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:47:36.054272  402885 cache_images.go:84] Images are preloaded, skipping loading
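Image preloading is check-then-extract: `crictl images --output json` shows kube-apiserver v1.30.3 is missing, so the ~406 MB cri-o preload tarball is copied to /preloaded.tar.lz4 and unpacked into /var, after which the second crictl listing confirms all images are present. A rough sketch of that flow (helper names are made up; the marker-image substring check is a simplification of parsing the JSON):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasImage reports whether crictl already lists the given image reference.
	// A real implementation would decode the JSON; a substring check is enough
	// for a sketch.
	func hasImage(ref string) bool {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		return err == nil && strings.Contains(string(out), ref)
	}

	// ensurePreloaded extracts the preload tarball into /var only when the
	// expected control-plane image is missing, mirroring the log above.
	func ensurePreloaded(tarball, marker string) error {
		if hasImage(marker) {
			fmt.Println("all images are preloaded, skipping extraction")
			return nil
		}
		// --xattrs keeps file capabilities; -I lz4 streams the decompression.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		return cmd.Run()
	}

	func main() {
		err := ensurePreloaded("/preloaded.tar.lz4", "registry.k8s.io/kube-apiserver:v1.30.3")
		fmt.Println("preload:", err)
	}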
	I0805 11:47:36.054283  402885 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0805 11:47:36.054430  402885 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:47:36.054499  402885 ssh_runner.go:195] Run: crio config
	I0805 11:47:36.104058  402885 cni.go:84] Creating CNI manager for ""
	I0805 11:47:36.104084  402885 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 11:47:36.104097  402885 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 11:47:36.104127  402885 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-672593 NodeName:ha-672593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
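The kubeadm config printed below is rendered from these options: the node IP and port 8443 feed InitConfiguration, the pod and service CIDRs feed ClusterConfiguration's networking block, and controlPlaneEndpoint is pinned to control-plane.minikube.internal:8443, which is later mapped to the HA VIP in /etc/hosts. A hypothetical text/template sketch of that substitution (the field names and template text here are illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// opts holds the handful of values substituted into the rendered config.
	type opts struct {
		AdvertiseAddress string
		BindPort         int
		PodSubnet        string
		ServiceSubnet    string
		Endpoint         string
	}

	var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.Endpoint}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`))

	func main() {
		_ = tmpl.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.39.102",
			BindPort:         8443,
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			Endpoint:         "control-plane.minikube.internal:8443",
		})
	}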
	I0805 11:47:36.104307  402885 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-672593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 11:47:36.104341  402885 kube-vip.go:115] generating kube-vip config ...
	I0805 11:47:36.104392  402885 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:47:36.123514  402885 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:47:36.123633  402885 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
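kube-vip runs as a static pod: the instances elect a leader on the plndr-cp-lock lease, the leader answers ARP for the virtual IP 192.168.39.254 on eth0, and with lb_enable it also load-balances API server traffic on port 8443. Because it is a static pod, the rendered manifest only needs to be written into the kubelet's manifest directory (the 1447-byte scp a few lines below); a minimal sketch of that write, with the manifest content abbreviated and the helper name made up:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// writeStaticPod places a rendered manifest where the kubelet picks it up
	// automatically; no API server is needed, which is what lets kube-vip bring
	// the control-plane VIP online before the cluster exists.
	func writeStaticPod(manifestDir, name string, rendered []byte) error {
		if err := os.MkdirAll(manifestDir, 0o755); err != nil {
			return err
		}
		return os.WriteFile(filepath.Join(manifestDir, name), rendered, 0o600)
	}

	func main() {
		manifest := []byte("apiVersion: v1\nkind: Pod\n# ... rendered kube-vip pod spec ...\n")
		err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", manifest)
		fmt.Println("write kube-vip manifest:", err)
	}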
	I0805 11:47:36.123690  402885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:47:36.133420  402885 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 11:47:36.133496  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 11:47:36.142489  402885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0805 11:47:36.159165  402885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:47:36.175609  402885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0805 11:47:36.192086  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0805 11:47:36.207817  402885 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:47:36.211345  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:47:36.222877  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:47:36.352753  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:47:36.370110  402885 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.102
	I0805 11:47:36.370135  402885 certs.go:194] generating shared ca certs ...
	I0805 11:47:36.370156  402885 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.370327  402885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:47:36.370389  402885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:47:36.370405  402885 certs.go:256] generating profile certs ...
	I0805 11:47:36.370550  402885 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:47:36.370571  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt with IP's: []
	I0805 11:47:36.443467  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt ...
	I0805 11:47:36.443497  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt: {Name:mk64efb16e1b54b1ad46318bd3555907edacc1fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.443681  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key ...
	I0805 11:47:36.443696  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key: {Name:mka046d85df2c8ea9a81fa425ffb812340b51d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.443826  402885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0
	I0805 11:47:36.443846  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I0805 11:47:36.625998  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0 ...
	I0805 11:47:36.626035  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0: {Name:mk971fadbe0c7eacc8f710f7033a3327fa9ee2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.626234  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0 ...
	I0805 11:47:36.626253  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0: {Name:mk1c5497454e604075e29d080c0dc346f196be2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.626361  402885 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.d1de58d0 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:47:36.626442  402885 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.d1de58d0 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:47:36.626498  402885 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:47:36.626515  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt with IP's: []
	I0805 11:47:36.984398  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt ...
	I0805 11:47:36.984430  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt: {Name:mk4cd8e8ae8575603b5e1fa8b77e6557d8c1ece5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.984602  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key ...
	I0805 11:47:36.984623  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key: {Name:mkce91befe6e8431fd2dfc816ef3f4abd3a91050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:47:36.984720  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:47:36.984744  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:47:36.984756  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:47:36.984769  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:47:36.984782  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:47:36.984806  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:47:36.984819  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:47:36.984831  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:47:36.984882  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:47:36.984918  402885 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:47:36.984927  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:47:36.984948  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:47:36.984977  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:47:36.985007  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:47:36.985044  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:47:36.985075  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:36.985089  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:47:36.985106  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:47:36.985694  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:47:37.011987  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:47:37.035377  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:47:37.058543  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:47:37.081913  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 11:47:37.105057  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 11:47:37.130939  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:47:37.157787  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:47:37.186187  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:47:37.213848  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:47:37.237251  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:47:37.260594  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 11:47:37.277763  402885 ssh_runner.go:195] Run: openssl version
	I0805 11:47:37.284446  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:47:37.295487  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:47:37.299976  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:47:37.300031  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:47:37.305863  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 11:47:37.316889  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:47:37.327673  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:47:37.332181  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:47:37.332236  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:47:37.338018  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 11:47:37.348910  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:47:37.359270  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:37.363584  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:37.363622  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:47:37.369239  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
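Note: the test/ln pairs above are how minikube installs its CA material on the guest: each PEM under /usr/share/ca-certificates is hashed with 'openssl x509 -hash -noout -in <cert>' and a symlink named '<hash>.0' is force-created under /etc/ssl/certs so OpenSSL-based clients trust it. A minimal, hypothetical Go sketch of that pattern (helper name and paths are illustrative, not the minikube source):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the pattern in the log above: hash the certificate with
// openssl, then recreate the /etc/ssl/certs/<hash>.0 symlink (ln -fs semantics).
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // drop any stale link before recreating it
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}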
	I0805 11:47:37.379604  402885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:47:37.383537  402885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:47:37.383591  402885 kubeadm.go:392] StartCluster: {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:47:37.383664  402885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 11:47:37.383715  402885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 11:47:37.419857  402885 cri.go:89] found id: ""
	I0805 11:47:37.419925  402885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 11:47:37.430051  402885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 11:47:37.439307  402885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 11:47:37.448637  402885 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 11:47:37.448654  402885 kubeadm.go:157] found existing configuration files:
	
	I0805 11:47:37.448703  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 11:47:37.457516  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 11:47:37.457576  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 11:47:37.466851  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 11:47:37.475755  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 11:47:37.475799  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 11:47:37.485152  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 11:47:37.494271  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 11:47:37.494313  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 11:47:37.503315  402885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 11:47:37.512128  402885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 11:47:37.512173  402885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 11:47:37.521489  402885 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 11:47:37.624606  402885 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 11:47:37.624706  402885 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 11:47:37.757030  402885 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 11:47:37.757209  402885 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 11:47:37.757380  402885 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 11:47:37.962278  402885 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 11:47:37.964191  402885 out.go:204]   - Generating certificates and keys ...
	I0805 11:47:37.964276  402885 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 11:47:37.964378  402885 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 11:47:38.139549  402885 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 11:47:38.277362  402885 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 11:47:38.403783  402885 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 11:47:38.484752  402885 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 11:47:38.681349  402885 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 11:47:38.681515  402885 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-672593 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0805 11:47:38.773264  402885 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 11:47:38.773407  402885 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-672593 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0805 11:47:38.924683  402885 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 11:47:39.021527  402885 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 11:47:39.134668  402885 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 11:47:39.134782  402885 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 11:47:39.422524  402885 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 11:47:39.955462  402885 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 11:47:40.308237  402885 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 11:47:40.361656  402885 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 11:47:40.479271  402885 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 11:47:40.479670  402885 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 11:47:40.482134  402885 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 11:47:40.483922  402885 out.go:204]   - Booting up control plane ...
	I0805 11:47:40.484030  402885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 11:47:40.484132  402885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 11:47:40.484213  402885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 11:47:40.498471  402885 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 11:47:40.499335  402885 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 11:47:40.499412  402885 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 11:47:40.625259  402885 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 11:47:40.625403  402885 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 11:47:41.626409  402885 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00164977s
	I0805 11:47:41.626669  402885 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 11:47:47.305194  402885 kubeadm.go:310] [api-check] The API server is healthy after 5.680388911s
	I0805 11:47:47.319874  402885 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 11:47:47.332106  402885 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 11:47:47.367795  402885 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 11:47:47.368002  402885 kubeadm.go:310] [mark-control-plane] Marking the node ha-672593 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 11:47:47.379722  402885 kubeadm.go:310] [bootstrap-token] Using token: rofbjc.vrvrgkgc24h3j2yi
	I0805 11:47:47.381086  402885 out.go:204]   - Configuring RBAC rules ...
	I0805 11:47:47.381212  402885 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 11:47:47.392759  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 11:47:47.400174  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 11:47:47.405738  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 11:47:47.408907  402885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 11:47:47.411987  402885 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 11:47:47.711919  402885 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 11:47:48.186021  402885 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 11:47:48.711616  402885 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 11:47:48.711640  402885 kubeadm.go:310] 
	I0805 11:47:48.711690  402885 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 11:47:48.711694  402885 kubeadm.go:310] 
	I0805 11:47:48.711852  402885 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 11:47:48.711878  402885 kubeadm.go:310] 
	I0805 11:47:48.711939  402885 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 11:47:48.712020  402885 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 11:47:48.712090  402885 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 11:47:48.712103  402885 kubeadm.go:310] 
	I0805 11:47:48.712173  402885 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 11:47:48.712186  402885 kubeadm.go:310] 
	I0805 11:47:48.712249  402885 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 11:47:48.712263  402885 kubeadm.go:310] 
	I0805 11:47:48.712334  402885 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 11:47:48.712448  402885 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 11:47:48.712537  402885 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 11:47:48.712550  402885 kubeadm.go:310] 
	I0805 11:47:48.712663  402885 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 11:47:48.712767  402885 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 11:47:48.712777  402885 kubeadm.go:310] 
	I0805 11:47:48.712875  402885 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rofbjc.vrvrgkgc24h3j2yi \
	I0805 11:47:48.712981  402885 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 11:47:48.713009  402885 kubeadm.go:310] 	--control-plane 
	I0805 11:47:48.713022  402885 kubeadm.go:310] 
	I0805 11:47:48.713117  402885 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 11:47:48.713128  402885 kubeadm.go:310] 
	I0805 11:47:48.713258  402885 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rofbjc.vrvrgkgc24h3j2yi \
	I0805 11:47:48.713435  402885 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 11:47:48.713543  402885 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 11:47:48.713554  402885 cni.go:84] Creating CNI manager for ""
	I0805 11:47:48.713560  402885 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 11:47:48.715158  402885 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 11:47:48.716496  402885 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 11:47:48.721941  402885 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 11:47:48.721958  402885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 11:47:48.742232  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 11:47:49.123502  402885 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 11:47:49.123602  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:49.123629  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-672593 minikube.k8s.io/updated_at=2024_08_05T11_47_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=ha-672593 minikube.k8s.io/primary=true
	I0805 11:47:49.265775  402885 ops.go:34] apiserver oom_adj: -16
	I0805 11:47:49.265862  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:49.766504  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:50.266289  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:50.766141  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:51.266826  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:51.765994  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:52.266341  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:52.766819  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:53.265993  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:53.766780  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:54.266174  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:54.766950  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:55.266643  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:55.766565  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:56.266555  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:56.766945  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:57.266238  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:57.766591  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:58.266182  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:58.766160  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:59.266832  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:47:59.765979  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:48:00.265995  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 11:48:00.356758  402885 kubeadm.go:1113] duration metric: took 11.233215725s to wait for elevateKubeSystemPrivileges
	I0805 11:48:00.356804  402885 kubeadm.go:394] duration metric: took 22.97321577s to StartCluster
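Note: the burst of 'kubectl get sa default' runs between 11:47:49 and 11:48:00 is a readiness poll; the elevateKubeSystemPrivileges step retries roughly every 500ms until the default ServiceAccount exists, since kubeadm's controllers create it asynchronously after init. A rough, hypothetical sketch of such a poll (not the actual minikube implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls "kubectl get sa default" until the default
// ServiceAccount appears, mirroring the repeated runs in the log above.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if cmd.Run() == nil {
			return nil // account exists; later RBAC/addon steps can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}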
	I0805 11:48:00.356828  402885 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:00.356910  402885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:48:00.357556  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:00.357769  402885 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:00.357792  402885 start.go:241] waiting for startup goroutines ...
	I0805 11:48:00.357777  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 11:48:00.357792  402885 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 11:48:00.357854  402885 addons.go:69] Setting storage-provisioner=true in profile "ha-672593"
	I0805 11:48:00.357877  402885 addons.go:69] Setting default-storageclass=true in profile "ha-672593"
	I0805 11:48:00.357926  402885 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-672593"
	I0805 11:48:00.357886  402885 addons.go:234] Setting addon storage-provisioner=true in "ha-672593"
	I0805 11:48:00.357999  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:00.358011  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:00.358440  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.358477  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.358440  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.358553  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.373501  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
	I0805 11:48:00.373587  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I0805 11:48:00.374054  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.374090  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.374598  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.374614  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.374727  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.374754  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.375056  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.375072  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.375279  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:00.375644  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.375680  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.377443  402885 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:48:00.377697  402885 kapi.go:59] client config for ha-672593: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key", CAFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 11:48:00.378183  402885 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 11:48:00.378399  402885 addons.go:234] Setting addon default-storageclass=true in "ha-672593"
	I0805 11:48:00.378432  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:00.378673  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.378694  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.391307  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I0805 11:48:00.391871  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.392426  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.392447  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.392774  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.392954  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:00.393569  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0805 11:48:00.394032  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.394569  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.394593  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.394932  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.395040  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:48:00.395555  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:00.395590  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:00.397044  402885 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 11:48:00.398512  402885 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:48:00.398535  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 11:48:00.398555  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:00.401399  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.401772  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:00.401794  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.402038  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:00.402203  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:00.402341  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:00.402490  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:00.412017  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0805 11:48:00.412450  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:00.413008  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:00.413034  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:00.413379  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:00.413561  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:00.414973  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:48:00.415191  402885 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 11:48:00.415206  402885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 11:48:00.415222  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:00.417804  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.418274  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:00.418299  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:00.418470  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:00.418640  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:00.418826  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:00.418970  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:00.490607  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 11:48:00.551074  402885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 11:48:00.573464  402885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:48:01.002396  402885 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
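Note: the shell pipeline at 11:48:00.490607 is how the 'host.minikube.internal' record gets into CoreDNS: the coredns ConfigMap is dumped, sed splices a hosts{} block (pointing at the 192.168.39.1 gateway) in front of the forward plugin, and the result is pushed back with kubectl replace. A standalone, hypothetical Go sketch of the same edit (paths and IP taken from the log; indentation assumes the stock Corefile layout):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	kubeconfig := "/var/lib/minikube/kubeconfig" // guest-side path used in the log
	hostIP := "192.168.39.1"                     // libvirt network gateway on the host

	// Read the current coredns ConfigMap as YAML.
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"-n", "kube-system", "get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Splice a hosts{} block in front of the forward plugin so that
	// host.minikube.internal resolves to the host, keeping the original line.
	forward := "        forward . /etc/resolv.conf"
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n" + forward
	patched := strings.Replace(string(out), forward, hosts, 1)

	// Push the patched manifest back with kubectl replace.
	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "replace", "-f", "-")
	cmd.Stdin = strings.NewReader(patched)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}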
	I0805 11:48:01.002507  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.002530  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.002830  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.002853  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.002867  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.002869  402885 main.go:141] libmachine: (ha-672593) DBG | Closing plugin on server side
	I0805 11:48:01.002877  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.003123  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.003139  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.003265  402885 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 11:48:01.003275  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:01.003286  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:01.003294  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:01.027307  402885 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0805 11:48:01.029401  402885 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 11:48:01.029417  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:01.029425  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:01.029429  402885 round_trippers.go:473]     Content-Type: application/json
	I0805 11:48:01.029433  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:01.034128  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:48:01.041462  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.041477  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.041790  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.041809  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.041832  402885 main.go:141] libmachine: (ha-672593) DBG | Closing plugin on server side
	I0805 11:48:01.316901  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.316932  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.317371  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.317401  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.317412  402885 main.go:141] libmachine: Making call to close driver server
	I0805 11:48:01.317420  402885 main.go:141] libmachine: (ha-672593) Calling .Close
	I0805 11:48:01.317698  402885 main.go:141] libmachine: Successfully made call to close driver server
	I0805 11:48:01.317708  402885 main.go:141] libmachine: (ha-672593) DBG | Closing plugin on server side
	I0805 11:48:01.317720  402885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 11:48:01.319871  402885 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 11:48:01.321367  402885 addons.go:510] duration metric: took 963.568265ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 11:48:01.321416  402885 start.go:246] waiting for cluster config update ...
	I0805 11:48:01.321443  402885 start.go:255] writing updated cluster config ...
	I0805 11:48:01.323158  402885 out.go:177] 
	I0805 11:48:01.324946  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:01.325050  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:48:01.326478  402885 out.go:177] * Starting "ha-672593-m02" control-plane node in "ha-672593" cluster
	I0805 11:48:01.327903  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:48:01.327937  402885 cache.go:56] Caching tarball of preloaded images
	I0805 11:48:01.328091  402885 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:48:01.328112  402885 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:48:01.328225  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:48:01.328526  402885 start.go:360] acquireMachinesLock for ha-672593-m02: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:48:01.328611  402885 start.go:364] duration metric: took 49.348µs to acquireMachinesLock for "ha-672593-m02"
	I0805 11:48:01.328645  402885 start.go:93] Provisioning new machine with config: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:01.328755  402885 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0805 11:48:01.330357  402885 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 11:48:01.330488  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:01.330522  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:01.345624  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0805 11:48:01.346110  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:01.346627  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:01.346648  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:01.346924  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:01.347103  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:01.347229  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:01.347387  402885 start.go:159] libmachine.API.Create for "ha-672593" (driver="kvm2")
	I0805 11:48:01.347409  402885 client.go:168] LocalClient.Create starting
	I0805 11:48:01.347439  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:48:01.347487  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:48:01.347508  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:48:01.347578  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:48:01.347605  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:48:01.347617  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:48:01.347642  402885 main.go:141] libmachine: Running pre-create checks...
	I0805 11:48:01.347654  402885 main.go:141] libmachine: (ha-672593-m02) Calling .PreCreateCheck
	I0805 11:48:01.347883  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetConfigRaw
	I0805 11:48:01.348402  402885 main.go:141] libmachine: Creating machine...
	I0805 11:48:01.348423  402885 main.go:141] libmachine: (ha-672593-m02) Calling .Create
	I0805 11:48:01.348600  402885 main.go:141] libmachine: (ha-672593-m02) Creating KVM machine...
	I0805 11:48:01.349851  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found existing default KVM network
	I0805 11:48:01.349995  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found existing private KVM network mk-ha-672593
	I0805 11:48:01.350143  402885 main.go:141] libmachine: (ha-672593-m02) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02 ...
	I0805 11:48:01.350168  402885 main.go:141] libmachine: (ha-672593-m02) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:48:01.350241  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.350134  403313 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:48:01.350361  402885 main.go:141] libmachine: (ha-672593-m02) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:48:01.641041  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.640909  403313 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa...
	I0805 11:48:01.696896  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.696742  403313 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/ha-672593-m02.rawdisk...
	I0805 11:48:01.696947  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Writing magic tar header
	I0805 11:48:01.696965  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Writing SSH key tar header
	I0805 11:48:01.696979  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:01.696920  403313 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02 ...
	I0805 11:48:01.697096  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02
	I0805 11:48:01.697150  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02 (perms=drwx------)
	I0805 11:48:01.697180  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:48:01.697196  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:48:01.697207  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:48:01.697219  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:48:01.697227  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:48:01.697236  402885 main.go:141] libmachine: (ha-672593-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:48:01.697245  402885 main.go:141] libmachine: (ha-672593-m02) Creating domain...
	I0805 11:48:01.697252  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:48:01.697266  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:48:01.697279  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:48:01.697293  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:48:01.697301  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Checking permissions on dir: /home
	I0805 11:48:01.697309  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Skipping /home - not owner
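	(Editor's note) The permission checks above walk up from the new machine directory, setting the owner execute bit on every parent the CI user owns and skipping /home because it belongs to someone else. A minimal Go sketch of that walk-up-and-chmod idea (a hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"syscall"
	)

	// ensureExecutableUpTo walks from dir up to stopAt, adding the owner
	// execute bit on every directory the current user owns and skipping
	// directories owned by someone else (like /home in the log above).
	func ensureExecutableUpTo(dir, stopAt string) error {
		for {
			info, err := os.Stat(dir)
			if err != nil {
				return err
			}
			if st, ok := info.Sys().(*syscall.Stat_t); ok && int(st.Uid) != os.Getuid() {
				fmt.Println("Skipping", dir, "- not owner")
			} else if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				return err
			}
			if dir == stopAt || dir == filepath.Dir(dir) {
				return nil
			}
			dir = filepath.Dir(dir)
		}
	}

	func main() {
		if err := ensureExecutableUpTo(os.Args[1], "/"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}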
	I0805 11:48:01.698416  402885 main.go:141] libmachine: (ha-672593-m02) define libvirt domain using xml: 
	I0805 11:48:01.698443  402885 main.go:141] libmachine: (ha-672593-m02) <domain type='kvm'>
	I0805 11:48:01.698454  402885 main.go:141] libmachine: (ha-672593-m02)   <name>ha-672593-m02</name>
	I0805 11:48:01.698462  402885 main.go:141] libmachine: (ha-672593-m02)   <memory unit='MiB'>2200</memory>
	I0805 11:48:01.698471  402885 main.go:141] libmachine: (ha-672593-m02)   <vcpu>2</vcpu>
	I0805 11:48:01.698481  402885 main.go:141] libmachine: (ha-672593-m02)   <features>
	I0805 11:48:01.698491  402885 main.go:141] libmachine: (ha-672593-m02)     <acpi/>
	I0805 11:48:01.698500  402885 main.go:141] libmachine: (ha-672593-m02)     <apic/>
	I0805 11:48:01.698511  402885 main.go:141] libmachine: (ha-672593-m02)     <pae/>
	I0805 11:48:01.698520  402885 main.go:141] libmachine: (ha-672593-m02)     
	I0805 11:48:01.698530  402885 main.go:141] libmachine: (ha-672593-m02)   </features>
	I0805 11:48:01.698536  402885 main.go:141] libmachine: (ha-672593-m02)   <cpu mode='host-passthrough'>
	I0805 11:48:01.698547  402885 main.go:141] libmachine: (ha-672593-m02)   
	I0805 11:48:01.698554  402885 main.go:141] libmachine: (ha-672593-m02)   </cpu>
	I0805 11:48:01.698563  402885 main.go:141] libmachine: (ha-672593-m02)   <os>
	I0805 11:48:01.698580  402885 main.go:141] libmachine: (ha-672593-m02)     <type>hvm</type>
	I0805 11:48:01.698592  402885 main.go:141] libmachine: (ha-672593-m02)     <boot dev='cdrom'/>
	I0805 11:48:01.698600  402885 main.go:141] libmachine: (ha-672593-m02)     <boot dev='hd'/>
	I0805 11:48:01.698614  402885 main.go:141] libmachine: (ha-672593-m02)     <bootmenu enable='no'/>
	I0805 11:48:01.698622  402885 main.go:141] libmachine: (ha-672593-m02)   </os>
	I0805 11:48:01.698630  402885 main.go:141] libmachine: (ha-672593-m02)   <devices>
	I0805 11:48:01.698641  402885 main.go:141] libmachine: (ha-672593-m02)     <disk type='file' device='cdrom'>
	I0805 11:48:01.698655  402885 main.go:141] libmachine: (ha-672593-m02)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/boot2docker.iso'/>
	I0805 11:48:01.698668  402885 main.go:141] libmachine: (ha-672593-m02)       <target dev='hdc' bus='scsi'/>
	I0805 11:48:01.698677  402885 main.go:141] libmachine: (ha-672593-m02)       <readonly/>
	I0805 11:48:01.698690  402885 main.go:141] libmachine: (ha-672593-m02)     </disk>
	I0805 11:48:01.698703  402885 main.go:141] libmachine: (ha-672593-m02)     <disk type='file' device='disk'>
	I0805 11:48:01.698710  402885 main.go:141] libmachine: (ha-672593-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:48:01.698723  402885 main.go:141] libmachine: (ha-672593-m02)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/ha-672593-m02.rawdisk'/>
	I0805 11:48:01.698735  402885 main.go:141] libmachine: (ha-672593-m02)       <target dev='hda' bus='virtio'/>
	I0805 11:48:01.698743  402885 main.go:141] libmachine: (ha-672593-m02)     </disk>
	I0805 11:48:01.698754  402885 main.go:141] libmachine: (ha-672593-m02)     <interface type='network'>
	I0805 11:48:01.698787  402885 main.go:141] libmachine: (ha-672593-m02)       <source network='mk-ha-672593'/>
	I0805 11:48:01.698810  402885 main.go:141] libmachine: (ha-672593-m02)       <model type='virtio'/>
	I0805 11:48:01.698819  402885 main.go:141] libmachine: (ha-672593-m02)     </interface>
	I0805 11:48:01.698830  402885 main.go:141] libmachine: (ha-672593-m02)     <interface type='network'>
	I0805 11:48:01.698840  402885 main.go:141] libmachine: (ha-672593-m02)       <source network='default'/>
	I0805 11:48:01.698855  402885 main.go:141] libmachine: (ha-672593-m02)       <model type='virtio'/>
	I0805 11:48:01.698867  402885 main.go:141] libmachine: (ha-672593-m02)     </interface>
	I0805 11:48:01.698877  402885 main.go:141] libmachine: (ha-672593-m02)     <serial type='pty'>
	I0805 11:48:01.698892  402885 main.go:141] libmachine: (ha-672593-m02)       <target port='0'/>
	I0805 11:48:01.698905  402885 main.go:141] libmachine: (ha-672593-m02)     </serial>
	I0805 11:48:01.698918  402885 main.go:141] libmachine: (ha-672593-m02)     <console type='pty'>
	I0805 11:48:01.698928  402885 main.go:141] libmachine: (ha-672593-m02)       <target type='serial' port='0'/>
	I0805 11:48:01.698936  402885 main.go:141] libmachine: (ha-672593-m02)     </console>
	I0805 11:48:01.698950  402885 main.go:141] libmachine: (ha-672593-m02)     <rng model='virtio'>
	I0805 11:48:01.698961  402885 main.go:141] libmachine: (ha-672593-m02)       <backend model='random'>/dev/random</backend>
	I0805 11:48:01.698970  402885 main.go:141] libmachine: (ha-672593-m02)     </rng>
	I0805 11:48:01.698978  402885 main.go:141] libmachine: (ha-672593-m02)     
	I0805 11:48:01.698991  402885 main.go:141] libmachine: (ha-672593-m02)     
	I0805 11:48:01.699024  402885 main.go:141] libmachine: (ha-672593-m02)   </devices>
	I0805 11:48:01.699054  402885 main.go:141] libmachine: (ha-672593-m02) </domain>
	I0805 11:48:01.699066  402885 main.go:141] libmachine: (ha-672593-m02) 
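	(Editor's note) The block above is the complete libvirt <domain> definition for the second node: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, the raw disk, and one NIC on mk-ha-672593 plus one on the default network. For illustration, a domain described by such an XML document can be defined and started with the libvirt Go bindings roughly like this; the module path, connection URI, and file name are assumptions, not taken from the log:

	package main

	import (
		"log"
		"os"

		"libvirt.org/go/libvirt" // assumed module path; github.com/libvirt/libvirt-go exposes the same API
	)

	func main() {
		xml, err := os.ReadFile("ha-672593-m02.xml") // the <domain> document logged above
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot it, i.e. "Creating domain..."
			log.Fatal(err)
		}
		log.Println("domain defined and started")
	}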
	I0805 11:48:01.706052  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:ea:0c:74 in network default
	I0805 11:48:01.706817  402885 main.go:141] libmachine: (ha-672593-m02) Ensuring networks are active...
	I0805 11:48:01.706843  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:01.707678  402885 main.go:141] libmachine: (ha-672593-m02) Ensuring network default is active
	I0805 11:48:01.708124  402885 main.go:141] libmachine: (ha-672593-m02) Ensuring network mk-ha-672593 is active
	I0805 11:48:01.708718  402885 main.go:141] libmachine: (ha-672593-m02) Getting domain xml...
	I0805 11:48:01.709550  402885 main.go:141] libmachine: (ha-672593-m02) Creating domain...
	I0805 11:48:02.917859  402885 main.go:141] libmachine: (ha-672593-m02) Waiting to get IP...
	I0805 11:48:02.918747  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:02.919199  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:02.919217  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:02.919183  403313 retry.go:31] will retry after 302.863518ms: waiting for machine to come up
	I0805 11:48:03.223803  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:03.224253  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:03.224282  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:03.224201  403313 retry.go:31] will retry after 382.819723ms: waiting for machine to come up
	I0805 11:48:03.608940  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:03.609403  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:03.609428  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:03.609344  403313 retry.go:31] will retry after 318.082741ms: waiting for machine to come up
	I0805 11:48:03.928829  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:03.929244  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:03.929274  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:03.929187  403313 retry.go:31] will retry after 479.149529ms: waiting for machine to come up
	I0805 11:48:04.409675  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:04.410224  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:04.410265  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:04.410173  403313 retry.go:31] will retry after 683.38485ms: waiting for machine to come up
	I0805 11:48:05.095020  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:05.095382  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:05.095411  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:05.095355  403313 retry.go:31] will retry after 944.815364ms: waiting for machine to come up
	I0805 11:48:06.042078  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:06.042559  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:06.042591  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:06.042512  403313 retry.go:31] will retry after 934.806892ms: waiting for machine to come up
	I0805 11:48:06.979021  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:06.979515  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:06.979541  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:06.979475  403313 retry.go:31] will retry after 1.203623715s: waiting for machine to come up
	I0805 11:48:08.184893  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:08.185316  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:08.185346  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:08.185260  403313 retry.go:31] will retry after 1.41925065s: waiting for machine to come up
	I0805 11:48:09.606879  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:09.607341  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:09.607370  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:09.607270  403313 retry.go:31] will retry after 1.671138336s: waiting for machine to come up
	I0805 11:48:11.280997  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:11.281363  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:11.281389  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:11.281332  403313 retry.go:31] will retry after 2.578509384s: waiting for machine to come up
	I0805 11:48:13.862566  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:13.862965  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:13.862990  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:13.862912  403313 retry.go:31] will retry after 2.291998643s: waiting for machine to come up
	I0805 11:48:16.156873  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:16.157200  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:16.157225  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:16.157174  403313 retry.go:31] will retry after 4.45165891s: waiting for machine to come up
	I0805 11:48:20.613052  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:20.613503  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find current IP address of domain ha-672593-m02 in network mk-ha-672593
	I0805 11:48:20.613534  402885 main.go:141] libmachine: (ha-672593-m02) DBG | I0805 11:48:20.613441  403313 retry.go:31] will retry after 5.087876314s: waiting for machine to come up
	I0805 11:48:25.704853  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.705361  402885 main.go:141] libmachine: (ha-672593-m02) Found IP for machine: 192.168.39.68
	I0805 11:48:25.705384  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has current primary IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
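	(Editor's note) The "will retry after …" lines above are one poll loop asking the mk-ha-672593 network for a DHCP lease matching the new MAC, with the wait growing from roughly 300ms to 5s until the lease appeared at 11:48:25. A generic Go sketch of that poll-with-backoff pattern; the growth factor, jitter, and the check itself are placeholders, not minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it succeeds or the deadline passes,
	// sleeping a growing, lightly jittered interval between attempts,
	// similar to the "will retry after 302ms / 382ms / ..." cadence above.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("retry %d: will retry after %s: %v\n", attempt, sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow roughly like the intervals in the log
		}
	}

	func main() {
		// Placeholder check: pretend the lease shows up after ~5 seconds.
		start := time.Now()
		err := waitFor(func() error {
			if time.Since(start) < 5*time.Second {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err)
	}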
	I0805 11:48:25.705393  402885 main.go:141] libmachine: (ha-672593-m02) Reserving static IP address...
	I0805 11:48:25.705715  402885 main.go:141] libmachine: (ha-672593-m02) DBG | unable to find host DHCP lease matching {name: "ha-672593-m02", mac: "52:54:00:67:7b:e8", ip: "192.168.39.68"} in network mk-ha-672593
	I0805 11:48:25.776238  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Getting to WaitForSSH function...
	I0805 11:48:25.776273  402885 main.go:141] libmachine: (ha-672593-m02) Reserved static IP address: 192.168.39.68
	I0805 11:48:25.776296  402885 main.go:141] libmachine: (ha-672593-m02) Waiting for SSH to be available...
	I0805 11:48:25.778763  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.779155  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:25.779186  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.779266  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Using SSH client type: external
	I0805 11:48:25.779298  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa (-rw-------)
	I0805 11:48:25.779330  402885 main.go:141] libmachine: (ha-672593-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:48:25.779346  402885 main.go:141] libmachine: (ha-672593-m02) DBG | About to run SSH command:
	I0805 11:48:25.779385  402885 main.go:141] libmachine: (ha-672593-m02) DBG | exit 0
	I0805 11:48:25.908557  402885 main.go:141] libmachine: (ha-672593-m02) DBG | SSH cmd err, output: <nil>: 
	I0805 11:48:25.908814  402885 main.go:141] libmachine: (ha-672593-m02) KVM machine creation complete!
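	(Editor's note) WaitForSSH above shells out to the system ssh binary with the options logged at 11:48:25.779330 and accepts the machine once a bare "exit 0" returns cleanly. A compact sketch of that reachability probe using os/exec; the host, user, and key path are copied from the log, while the retry interval and the subset of ssh options are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReachable runs `ssh ... exit 0` against the guest and reports
	// whether the command returned cleanly, retrying until maxWait expires.
	func sshReachable(user, host, keyPath string, maxWait time.Duration) bool {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			"exit", "0",
		}
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				return true
			}
			time.Sleep(2 * time.Second)
		}
		return false
	}

	func main() {
		ok := sshReachable("docker", "192.168.39.68",
			"/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa",
			2*time.Minute)
		fmt.Println("ssh reachable:", ok)
	}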
	I0805 11:48:25.909183  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetConfigRaw
	I0805 11:48:25.909795  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:25.910028  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:25.910231  402885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:48:25.910244  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 11:48:25.911548  402885 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:48:25.911561  402885 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:48:25.911567  402885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:48:25.911575  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:25.913870  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.914377  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:25.914403  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:25.914600  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:25.914792  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:25.914940  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:25.915099  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:25.915267  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:25.915497  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:25.915513  402885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:48:26.023197  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:48:26.023222  402885 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:48:26.023238  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.025829  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.026174  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.026207  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.026292  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.026551  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.026750  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.026921  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.027115  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.027346  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.027364  402885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:48:26.132333  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:48:26.132440  402885 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:48:26.132453  402885 main.go:141] libmachine: Provisioning with buildroot...
	I0805 11:48:26.132464  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:26.132744  402885 buildroot.go:166] provisioning hostname "ha-672593-m02"
	I0805 11:48:26.132763  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:26.132977  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.135523  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.135901  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.135916  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.136114  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.136277  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.136433  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.136567  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.136758  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.136912  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.136924  402885 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593-m02 && echo "ha-672593-m02" | sudo tee /etc/hostname
	I0805 11:48:26.253208  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593-m02
	
	I0805 11:48:26.253238  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.255880  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.256319  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.256359  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.256502  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.256723  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.256875  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.257002  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.257148  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.257336  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.257357  402885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:48:26.372664  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:48:26.372695  402885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:48:26.372714  402885 buildroot.go:174] setting up certificates
	I0805 11:48:26.372728  402885 provision.go:84] configureAuth start
	I0805 11:48:26.372736  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetMachineName
	I0805 11:48:26.372977  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:26.375201  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.375595  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.375620  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.375730  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.378096  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.378431  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.378451  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.378635  402885 provision.go:143] copyHostCerts
	I0805 11:48:26.378669  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:48:26.378704  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:48:26.378713  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:48:26.378776  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:48:26.378845  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:48:26.378868  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:48:26.378877  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:48:26.378910  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:48:26.378972  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:48:26.378998  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:48:26.379005  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:48:26.379042  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:48:26.379123  402885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593-m02 san=[127.0.0.1 192.168.39.68 ha-672593-m02 localhost minikube]
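	(Editor's note) configureAuth generates a server certificate signed by the local minikube CA, carrying the DNS and IP SANs listed on the line above. A trimmed-down sketch of issuing such a SAN-bearing certificate with crypto/x509; the file paths, key type, and validity period are placeholders, and minikube's own cert helper does more than this:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the signing CA (placeholder paths standing in for ca.pem / ca-key.pem).
		caCertPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			log.Fatal(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caCertPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// New key plus a template carrying the SANs from the log line above.
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-672593-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-672593-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}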
	I0805 11:48:26.606457  402885 provision.go:177] copyRemoteCerts
	I0805 11:48:26.606519  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:48:26.606547  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.609287  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.609596  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.609631  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.609725  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.609945  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.610151  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.610307  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:26.695566  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:48:26.695655  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 11:48:26.723973  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:48:26.724047  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 11:48:26.747390  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:48:26.747457  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:48:26.772915  402885 provision.go:87] duration metric: took 400.171697ms to configureAuth
	I0805 11:48:26.772946  402885 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:48:26.773159  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:26.773262  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:26.776201  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.776605  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:26.776636  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:26.776855  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:26.777069  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.777246  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:26.777414  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:26.777600  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:26.777848  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:26.777878  402885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:48:27.039393  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:48:27.039428  402885 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:48:27.039437  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetURL
	I0805 11:48:27.040785  402885 main.go:141] libmachine: (ha-672593-m02) DBG | Using libvirt version 6000000
	I0805 11:48:27.042890  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.043183  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.043222  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.043435  402885 main.go:141] libmachine: Docker is up and running!
	I0805 11:48:27.043450  402885 main.go:141] libmachine: Reticulating splines...
	I0805 11:48:27.043457  402885 client.go:171] duration metric: took 25.696041913s to LocalClient.Create
	I0805 11:48:27.043479  402885 start.go:167] duration metric: took 25.696094275s to libmachine.API.Create "ha-672593"
	I0805 11:48:27.043490  402885 start.go:293] postStartSetup for "ha-672593-m02" (driver="kvm2")
	I0805 11:48:27.043500  402885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:48:27.043515  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.043781  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:48:27.043806  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:27.045836  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.046182  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.046204  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.046356  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.046537  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.046718  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.046852  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:27.130015  402885 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:48:27.134348  402885 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:48:27.134376  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:48:27.134446  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:48:27.134547  402885 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:48:27.134561  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:48:27.134671  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:48:27.144049  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:48:27.167995  402885 start.go:296] duration metric: took 124.489233ms for postStartSetup
	I0805 11:48:27.168050  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetConfigRaw
	I0805 11:48:27.168656  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:27.172273  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.172709  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.172738  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.172996  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:48:27.173250  402885 start.go:128] duration metric: took 25.844480317s to createHost
	I0805 11:48:27.173281  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:27.175663  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.175987  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.176036  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.176239  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.176445  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.176618  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.176743  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.176874  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:48:27.177040  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0805 11:48:27.177050  402885 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:48:27.288647  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722858507.265904771
	
	I0805 11:48:27.288679  402885 fix.go:216] guest clock: 1722858507.265904771
	I0805 11:48:27.288690  402885 fix.go:229] Guest: 2024-08-05 11:48:27.265904771 +0000 UTC Remote: 2024-08-05 11:48:27.173265737 +0000 UTC m=+85.804470788 (delta=92.639034ms)
	I0805 11:48:27.288718  402885 fix.go:200] guest clock delta is within tolerance: 92.639034ms
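	(Editor's note) The clock check above reads `date +%s.%N` on the guest, compares it with the host time captured just before, and proceeds because the 92.639034ms delta is inside the allowed skew. A small sketch of that comparison using the two timestamps from the log; the tolerance value and the float parsing are assumptions, not minikube's fix.go:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is from the supplied host reference time.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values taken from the log above.
		host := time.Date(2024, 8, 5, 11, 48, 27, 173265737, time.UTC)
		delta, err := clockDelta("1722858507.265904771", host)
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed threshold, not minikube's actual value
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() < tolerance)
	}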
	I0805 11:48:27.288725  402885 start.go:83] releasing machines lock for "ha-672593-m02", held for 25.960099843s
	I0805 11:48:27.288760  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.289045  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:27.291857  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.292196  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.292227  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.294470  402885 out.go:177] * Found network options:
	I0805 11:48:27.295834  402885 out.go:177]   - NO_PROXY=192.168.39.102
	W0805 11:48:27.297178  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:48:27.297210  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.297850  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.298207  402885 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 11:48:27.298305  402885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:48:27.298351  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	W0805 11:48:27.298420  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:48:27.298511  402885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:48:27.298534  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 11:48:27.301174  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.301488  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.301519  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.301627  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.301685  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.301878  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.302045  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.302083  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:27.302106  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:27.302188  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:27.302345  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 11:48:27.302573  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 11:48:27.303922  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 11:48:27.304102  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 11:48:27.533535  402885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:48:27.539340  402885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:48:27.539394  402885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:48:27.556611  402885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:48:27.556635  402885 start.go:495] detecting cgroup driver to use...
	I0805 11:48:27.556702  402885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:48:27.573063  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:48:27.586934  402885 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:48:27.586986  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:48:27.600482  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:48:27.614532  402885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:48:27.741282  402885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:48:27.911805  402885 docker.go:233] disabling docker service ...
	I0805 11:48:27.911876  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:48:27.928908  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:48:27.942263  402885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:48:28.086907  402885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:48:28.207913  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:48:28.221916  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:48:28.244146  402885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:48:28.244214  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.257376  402885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:48:28.257457  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.267972  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.278416  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.288915  402885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:48:28.299660  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.309889  402885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:48:28.327242  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
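	(Editor's note) The run of sed commands above pins the pause image to registry.k8s.io/pause:3.9, switches CRI-O to the cgroupfs cgroup manager with conmon_cgroup = "pod", and opens unprivileged ports via default_sysctls, all by editing /etc/crio/crio.conf.d/02-crio.conf in place. A rough Go equivalent of those edits; minikube actually runs sed over SSH, so this local regexp version is only illustrative:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		conf := string(data)
		// Same substitutions the sed commands above perform.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			log.Fatal(err)
		}
	}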
	I0805 11:48:28.337306  402885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:48:28.346426  402885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:48:28.346478  402885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:48:28.361676  402885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:48:28.371024  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:48:28.489702  402885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:48:28.625580  402885 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:48:28.625670  402885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:48:28.630374  402885 start.go:563] Will wait 60s for crictl version
	I0805 11:48:28.630416  402885 ssh_runner.go:195] Run: which crictl
	I0805 11:48:28.634219  402885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:48:28.681308  402885 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:48:28.681401  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:48:28.710423  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:48:28.742765  402885 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:48:28.744074  402885 out.go:177]   - env NO_PROXY=192.168.39.102
	I0805 11:48:28.745370  402885 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 11:48:28.748024  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:28.748349  402885 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:48:16 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 11:48:28.748366  402885 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 11:48:28.748575  402885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:48:28.752872  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:48:28.765707  402885 mustload.go:65] Loading cluster: ha-672593
	I0805 11:48:28.765900  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:28.766170  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:28.766204  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:28.781593  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0805 11:48:28.782040  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:28.782493  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:28.782514  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:28.782819  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:28.783004  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:48:28.784613  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:28.784888  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:28.784910  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:28.801139  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I0805 11:48:28.801558  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:28.802039  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:28.802057  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:28.802374  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:28.802561  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
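	(Editor's note) The "Launching plugin server" / "Plugin server listening at 127.0.0.1:…" lines show that the kvm2 driver runs as a separate plugin process which libmachine drives over a local RPC connection, one method call per log line (.GetVersion, .GetMachineName, .GetState, …). Purely to illustrate that shape, a generic net/rpc client; the service and method names below are hypothetical and are not the real libmachine plugin protocol:

	package main

	import (
		"fmt"
		"log"
		"net/rpc"
	)

	func main() {
		// Address taken from the "Plugin server listening at ..." line above;
		// "Driver.GetVersion" is a made-up service/method name for illustration.
		client, err := rpc.Dial("tcp", "127.0.0.1:40553")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		var version int
		if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
			log.Fatal(err)
		}
		fmt.Println("plugin API version:", version)
	}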
	I0805 11:48:28.802734  402885 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.68
	I0805 11:48:28.802749  402885 certs.go:194] generating shared ca certs ...
	I0805 11:48:28.802768  402885 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:28.802921  402885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:48:28.802999  402885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:48:28.803014  402885 certs.go:256] generating profile certs ...
	I0805 11:48:28.803128  402885 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:48:28.803164  402885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de
	I0805 11:48:28.803184  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.68 192.168.39.254]
	I0805 11:48:29.166917  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de ...
	I0805 11:48:29.166948  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de: {Name:mk675c593a87f2257d2750f97816b630d94b443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:29.167153  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de ...
	I0805 11:48:29.167172  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de: {Name:mkb191d4a87b24cab83b77c2e4b67c3fe8122f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:48:29.167270  402885 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.143f38de -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:48:29.167442  402885 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.143f38de -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:48:29.167623  402885 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:48:29.167644  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:48:29.167668  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:48:29.167687  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:48:29.167705  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:48:29.167722  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:48:29.167736  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:48:29.167772  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:48:29.167804  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:48:29.167870  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:48:29.167911  402885 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:48:29.167924  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:48:29.167957  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:48:29.167992  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:48:29.168034  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:48:29.168095  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:48:29.168127  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.168153  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.168170  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.168214  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:29.171252  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:29.171598  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:29.171627  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:29.171790  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:29.171998  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:29.172131  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:29.172269  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:29.248148  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 11:48:29.253076  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 11:48:29.264853  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 11:48:29.268904  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 11:48:29.279274  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 11:48:29.283596  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 11:48:29.294367  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 11:48:29.298519  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 11:48:29.311381  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 11:48:29.316314  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 11:48:29.326771  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 11:48:29.330755  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0805 11:48:29.341542  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:48:29.367072  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:48:29.391061  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:48:29.414257  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:48:29.440624  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0805 11:48:29.465821  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 11:48:29.489923  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:48:29.513668  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:48:29.536786  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:48:29.560954  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:48:29.585731  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:48:29.612407  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 11:48:29.629067  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 11:48:29.645661  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 11:48:29.662647  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 11:48:29.680905  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 11:48:29.698729  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0805 11:48:29.716375  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 11:48:29.733737  402885 ssh_runner.go:195] Run: openssl version
	I0805 11:48:29.739709  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:48:29.750894  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.755513  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.755593  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:48:29.761503  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 11:48:29.772864  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:48:29.784142  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.788775  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.788848  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:48:29.794459  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:48:29.805331  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:48:29.815852  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.820248  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.820314  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:48:29.826195  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
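The steps above install the host's CA bundles into the node's trust store: each .pem under /usr/share/ca-certificates is hashed with openssl and symlinked into /etc/ssl/certs under its subject hash (e.g. b5213941.0) so TLS clients on the node can resolve it. A minimal Go sketch of those two commands, not minikube's actual code; the path is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // assumed path, as in the log

	// `openssl x509 -hash -noout -in <pem>` prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of: test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted:", pem, "->", link)
}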
	I0805 11:48:29.836683  402885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:48:29.841095  402885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:48:29.841148  402885 kubeadm.go:934] updating node {m02 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0805 11:48:29.841238  402885 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:48:29.841264  402885 kube-vip.go:115] generating kube-vip config ...
	I0805 11:48:29.841294  402885 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:48:29.858412  402885 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:48:29.858491  402885 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
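The config above is a static-pod manifest: a few lines below it is written to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet runs it directly, and the container's env vars carry the entire configuration (VIP 192.168.39.254 on eth0, leader election, control-plane load balancing on port 8443). A small sketch, assuming gopkg.in/yaml.v3 is available, that parses such a manifest and prints those env vars; the struct is a hand-rolled subset, not the real Pod API types:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// manifest covers only the fields this sketch reads.
type manifest struct {
	Spec struct {
		Containers []struct {
			Name  string `yaml:"name"`
			Image string `yaml:"image"`
			Env   []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		panic(err)
	}
	for _, c := range m.Spec.Containers {
		fmt.Println("container:", c.Name, "image:", c.Image)
		for _, e := range c.Env {
			fmt.Printf("  %s=%s\n", e.Name, e.Value)
		}
	}
}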
	I0805 11:48:29.858560  402885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:48:29.868915  402885 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 11:48:29.868978  402885 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 11:48:29.878710  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 11:48:29.878750  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:48:29.878778  402885 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0805 11:48:29.878810  402885 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0805 11:48:29.878835  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:48:29.883247  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 11:48:29.883269  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 11:48:30.745724  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:48:30.745806  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:48:30.750912  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 11:48:30.750943  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 11:48:31.103575  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:48:31.118930  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:48:31.119043  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:48:31.123655  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 11:48:31.123696  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
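Each binary is fetched from dl.k8s.io with a checksum=file:...sha256 query string, i.e. the download is verified against the published .sha256 file before being cached locally and copied to /var/lib/minikube/binaries on the node. A hedged sketch of that verify-then-save pattern (the URL comes from the log; everything else is illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	// The .sha256 file holds the hex digest; compare it against the downloaded bytes.
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified and saved kubectl")
}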
	I0805 11:48:31.536979  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 11:48:31.546582  402885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0805 11:48:31.562857  402885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:48:31.579410  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 11:48:31.595773  402885 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:48:31.599495  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
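The one-liner above rewrites /etc/hosts: it drops any stale control-plane.minikube.internal line, appends one pointing at the API-server VIP, and copies the temp file back into place with sudo. The same logic as an illustrative Go sketch (requires write access to /etc/hosts):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// same filter as `grep -v $'\tcontrol-plane.minikube.internal$'`
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}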
	I0805 11:48:31.613985  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:48:31.744740  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:48:31.762194  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:48:31.762710  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:48:31.762778  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:48:31.778060  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I0805 11:48:31.778484  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:48:31.778959  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:48:31.778978  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:48:31.779317  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:48:31.779507  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:48:31.779669  402885 start.go:317] joinCluster: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:48:31.779779  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 11:48:31.779802  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:48:31.782506  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:31.782912  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:48:31.782949  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:48:31.783252  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:48:31.783430  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:48:31.783580  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:48:31.783703  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:48:31.940397  402885 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:31.940449  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7hrapk.99ds5t9ultc1uhu4 --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0805 11:48:55.526418  402885 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7hrapk.99ds5t9ultc1uhu4 --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (23.585944246s)
	I0805 11:48:55.526449  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 11:48:56.068657  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-672593-m02 minikube.k8s.io/updated_at=2024_08_05T11_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=ha-672593 minikube.k8s.io/primary=false
	I0805 11:48:56.174867  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-672593-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 11:48:56.316883  402885 start.go:319] duration metric: took 24.537207722s to joinCluster
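After kubeadm join returns, the new node is labeled with the minikube metadata and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed so it can schedule regular workloads as well. A client-go sketch of that taint removal, with the kubeconfig path and node name assumed from the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-672593-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Keep every taint except control-plane:NoSchedule, mirroring
	// `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`.
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}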
	I0805 11:48:56.316980  402885 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:48:56.317322  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:48:56.318612  402885 out.go:177] * Verifying Kubernetes components...
	I0805 11:48:56.319939  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:48:56.558841  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:48:56.578557  402885 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:48:56.578917  402885 kapi.go:59] client config for ha-672593: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key", CAFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 11:48:56.579008  402885 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0805 11:48:56.579356  402885 node_ready.go:35] waiting up to 6m0s for node "ha-672593-m02" to be "Ready" ...
	I0805 11:48:56.579481  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:56.579494  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:56.579505  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:56.579511  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:56.599700  402885 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0805 11:48:57.080108  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:57.080133  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:57.080145  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:57.080150  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:57.084537  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:48:57.580565  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:57.580594  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:57.580605  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:57.580610  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:57.585753  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:48:58.079648  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:58.079677  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:58.079688  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:58.079695  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:58.083254  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:48:58.580559  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:58.580585  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:58.580598  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:58.580603  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:58.584453  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:48:58.585216  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:48:59.079663  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:59.079688  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:59.079700  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:59.079705  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:59.083765  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:48:59.580423  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:48:59.580448  402885 round_trippers.go:469] Request Headers:
	I0805 11:48:59.580456  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:48:59.580462  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:48:59.584400  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:00.080594  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:00.080620  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:00.080631  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:00.080638  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:00.087087  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:49:00.580570  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:00.580597  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:00.580609  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:00.580616  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:00.583885  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:01.080445  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:01.080476  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:01.080488  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:01.080495  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:01.083958  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:01.084750  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:01.580286  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:01.580311  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:01.580322  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:01.580329  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:01.583567  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:02.080313  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:02.080337  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:02.080345  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:02.080350  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:02.084323  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:02.580551  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:02.580574  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:02.580583  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:02.580587  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:02.584267  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:03.080170  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:03.080193  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:03.080201  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:03.080205  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:03.083026  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:03.579691  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:03.579718  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:03.579730  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:03.579735  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:03.583236  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:03.583923  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:04.080087  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:04.080122  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:04.080130  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:04.080134  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:04.083800  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:04.580499  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:04.580533  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:04.580544  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:04.580551  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:04.584076  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:05.079989  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:05.080035  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:05.080046  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:05.080050  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:05.085032  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:05.579843  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:05.579873  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:05.579884  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:05.579890  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:05.583592  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:05.584365  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:06.079736  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:06.079782  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:06.079800  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:06.079805  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:06.083142  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:06.580138  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:06.580166  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:06.580175  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:06.580180  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:06.584140  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:07.079630  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:07.079659  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:07.079670  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:07.079678  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:07.083088  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:07.580280  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:07.580305  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:07.580313  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:07.580317  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:07.583537  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:08.079621  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:08.079646  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:08.079655  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:08.079658  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:08.082922  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:08.083484  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:08.579882  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:08.579907  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:08.579916  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:08.579920  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:08.583265  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:09.080350  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:09.080374  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:09.080387  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:09.080392  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:09.083737  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:09.579791  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:09.579814  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:09.579822  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:09.579826  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:09.583634  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:10.079631  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:10.079654  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:10.079662  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:10.079665  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:10.082948  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:10.083628  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:10.580238  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:10.580263  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:10.580307  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:10.580314  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:10.589238  402885 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 11:49:11.079870  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:11.079900  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:11.079911  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:11.079915  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:11.084116  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:11.580430  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:11.580455  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:11.580464  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:11.580469  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:11.583695  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:12.080479  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:12.080501  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:12.080509  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:12.080513  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:12.083693  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:12.084378  402885 node_ready.go:53] node "ha-672593-m02" has status "Ready":"False"
	I0805 11:49:12.579782  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:12.579809  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:12.579821  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:12.579827  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:12.583199  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:13.080193  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:13.080217  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:13.080225  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:13.080228  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:13.083579  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:13.580615  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:13.580640  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:13.580646  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:13.580650  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:13.585161  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:14.080187  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.080214  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.080225  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.080231  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.084544  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:49:14.085474  402885 node_ready.go:49] node "ha-672593-m02" has status "Ready":"True"
	I0805 11:49:14.085495  402885 node_ready.go:38] duration metric: took 17.506115032s for node "ha-672593-m02" to be "Ready" ...
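The node_ready.go wait above is a plain poll: GET the node roughly every half second and stop once its Ready condition reports True, which here took about 17.5s. An equivalent client-go sketch, assuming a kubeconfig at the default location:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "ha-672593-m02" // from the log above
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(nodeName, "is Ready")
}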
	I0805 11:49:14.085506  402885 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:49:14.085635  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:14.085646  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.085653  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.085659  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.091201  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:49:14.097313  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.097408  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sfh7c
	I0805 11:49:14.097416  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.097424  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.097430  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.100382  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.100960  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.100975  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.100984  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.100989  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.103390  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.104025  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.104048  402885 pod_ready.go:81] duration metric: took 6.708322ms for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.104059  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.104107  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sgd4v
	I0805 11:49:14.104116  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.104122  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.104126  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.106567  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.107261  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.107278  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.107289  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.107296  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.109533  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.110141  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.110164  402885 pod_ready.go:81] duration metric: took 6.09529ms for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.110175  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.110229  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593
	I0805 11:49:14.110237  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.110243  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.110246  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.113626  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.114280  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.114294  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.114301  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.114305  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.117412  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.118170  402885 pod_ready.go:92] pod "etcd-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.118188  402885 pod_ready.go:81] duration metric: took 8.002529ms for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.118196  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.118238  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m02
	I0805 11:49:14.118245  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.118251  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.118257  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.120418  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.121019  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.121031  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.121038  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.121043  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.123844  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:49:14.124626  402885 pod_ready.go:92] pod "etcd-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.124648  402885 pod_ready.go:81] duration metric: took 6.444632ms for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.124666  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.281129  402885 request.go:629] Waited for 156.38375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:49:14.281215  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:49:14.281226  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.281254  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.281262  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.284965  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.480911  402885 request.go:629] Waited for 195.176702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.481004  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:14.481010  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.481018  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.481025  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.484641  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.485138  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.485156  402885 pod_ready.go:81] duration metric: took 360.478367ms for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.485168  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.680246  402885 request.go:629] Waited for 194.979653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:49:14.680317  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:49:14.680325  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.680337  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.680347  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.683982  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.881084  402885 request.go:629] Waited for 196.407276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.881149  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:14.881154  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:14.881162  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:14.881166  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:14.884441  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:14.885131  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:14.885158  402885 pod_ready.go:81] duration metric: took 399.981518ms for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:14.885172  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.081186  402885 request.go:629] Waited for 195.91074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:49:15.081267  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:49:15.081277  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.081292  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.081302  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.084342  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.280285  402885 request.go:629] Waited for 195.278302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:15.280404  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:15.280415  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.280426  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.280433  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.283844  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.284509  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:15.284528  402885 pod_ready.go:81] duration metric: took 399.349189ms for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.284538  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.480679  402885 request.go:629] Waited for 196.067694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:49:15.480766  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:49:15.480774  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.480785  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.480795  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.484099  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.681215  402885 request.go:629] Waited for 196.399946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:15.681312  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:15.681323  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.681336  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.681348  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.684647  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:15.685195  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:15.685220  402885 pod_ready.go:81] duration metric: took 400.675947ms for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.685229  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:15.880237  402885 request.go:629] Waited for 194.922894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:49:15.880318  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:49:15.880325  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:15.880332  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:15.880336  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:15.883927  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.080288  402885 request.go:629] Waited for 195.361808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:16.080355  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:16.080361  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.080369  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.080374  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.083757  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.084301  402885 pod_ready.go:92] pod "kube-proxy-mdwh2" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:16.084321  402885 pod_ready.go:81] duration metric: took 399.08567ms for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.084333  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.280547  402885 request.go:629] Waited for 196.116287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:49:16.280639  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:49:16.280647  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.280663  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.280671  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.284090  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.480283  402885 request.go:629] Waited for 195.575461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.480354  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.480359  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.480367  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.480371  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.483901  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.484533  402885 pod_ready.go:92] pod "kube-proxy-wtsdt" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:16.484555  402885 pod_ready.go:81] duration metric: took 400.214339ms for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.484567  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.680924  402885 request.go:629] Waited for 196.260193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:49:16.680994  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:49:16.680999  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.681007  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.681016  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.684490  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.880561  402885 request.go:629] Waited for 195.448909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.880624  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:49:16.880628  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:16.880637  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:16.880648  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:16.884661  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:16.885298  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:16.885325  402885 pod_ready.go:81] duration metric: took 400.748413ms for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:16.885341  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:17.080234  402885 request.go:629] Waited for 194.799084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:49:17.080333  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:49:17.080351  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.080364  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.080375  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.083923  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:17.280992  402885 request.go:629] Waited for 196.405526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:17.281067  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:49:17.281076  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.281085  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.281096  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.284891  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:17.285522  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:49:17.285547  402885 pod_ready.go:81] duration metric: took 400.19791ms for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:49:17.285561  402885 pod_ready.go:38] duration metric: took 3.200021393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
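The block above polls each control-plane pod through the apiserver until its Ready condition is true, throttling itself between requests. A rough command-line equivalent is sketched below (the kubectl context name is assumed to match the profile ha-672593; pod names, namespace, and the 6m budget are taken from the log):

    # Wait for the same control-plane pods the log polls to report Ready (timeout is illustrative)
    kubectl --context ha-672593 -n kube-system wait --for=condition=Ready \
      pod/etcd-ha-672593 pod/etcd-ha-672593-m02 \
      pod/kube-apiserver-ha-672593 pod/kube-apiserver-ha-672593-m02 \
      pod/kube-scheduler-ha-672593 pod/kube-scheduler-ha-672593-m02 \
      --timeout=6m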
	I0805 11:49:17.285580  402885 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:49:17.285655  402885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:49:17.302148  402885 api_server.go:72] duration metric: took 20.985124928s to wait for apiserver process to appear ...
	I0805 11:49:17.302174  402885 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:49:17.302199  402885 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0805 11:49:17.306850  402885 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0805 11:49:17.306917  402885 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0805 11:49:17.306925  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.306933  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.306936  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.307735  402885 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 11:49:17.307851  402885 api_server.go:141] control plane version: v1.30.3
	I0805 11:49:17.307868  402885 api_server.go:131] duration metric: took 5.687191ms to wait for apiserver health ...
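The healthz wait above is a plain HTTPS GET against the apiserver, followed by a /version request to record the control-plane version. A minimal sketch of the same probe (-k skips certificate verification purely for brevity; the real client authenticates with the cluster certificates):

    # Probe the endpoints the log hits above; a healthy apiserver answers the first with the body "ok"
    curl -k https://192.168.39.102:8443/healthz
    curl -k https://192.168.39.102:8443/version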
	I0805 11:49:17.307876  402885 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:49:17.480226  402885 request.go:629] Waited for 172.274918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.480300  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.480306  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.480313  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.480317  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.486540  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:49:17.490941  402885 system_pods.go:59] 17 kube-system pods found
	I0805 11:49:17.490973  402885 system_pods.go:61] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:49:17.490978  402885 system_pods.go:61] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:49:17.490982  402885 system_pods.go:61] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:49:17.490985  402885 system_pods.go:61] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:49:17.490988  402885 system_pods.go:61] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:49:17.490991  402885 system_pods.go:61] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:49:17.490995  402885 system_pods.go:61] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:49:17.490998  402885 system_pods.go:61] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:49:17.491001  402885 system_pods.go:61] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:49:17.491004  402885 system_pods.go:61] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:49:17.491007  402885 system_pods.go:61] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:49:17.491012  402885 system_pods.go:61] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:49:17.491019  402885 system_pods.go:61] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:49:17.491022  402885 system_pods.go:61] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:49:17.491025  402885 system_pods.go:61] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:49:17.491028  402885 system_pods.go:61] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:49:17.491031  402885 system_pods.go:61] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:49:17.491045  402885 system_pods.go:74] duration metric: took 183.154454ms to wait for pod list to return data ...
	I0805 11:49:17.491062  402885 default_sa.go:34] waiting for default service account to be created ...
	I0805 11:49:17.680408  402885 request.go:629] Waited for 189.264104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:49:17.680470  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:49:17.680475  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.680483  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.680488  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.684004  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:17.684266  402885 default_sa.go:45] found service account: "default"
	I0805 11:49:17.684287  402885 default_sa.go:55] duration metric: took 193.216718ms for default service account to be created ...
	I0805 11:49:17.684298  402885 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 11:49:17.880805  402885 request.go:629] Waited for 196.392194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.880870  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:49:17.880898  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:17.880910  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:17.880915  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:17.886649  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:49:17.892241  402885 system_pods.go:86] 17 kube-system pods found
	I0805 11:49:17.892268  402885 system_pods.go:89] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:49:17.892274  402885 system_pods.go:89] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:49:17.892278  402885 system_pods.go:89] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:49:17.892283  402885 system_pods.go:89] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:49:17.892287  402885 system_pods.go:89] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:49:17.892290  402885 system_pods.go:89] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:49:17.892295  402885 system_pods.go:89] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:49:17.892299  402885 system_pods.go:89] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:49:17.892303  402885 system_pods.go:89] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:49:17.892307  402885 system_pods.go:89] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:49:17.892312  402885 system_pods.go:89] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:49:17.892317  402885 system_pods.go:89] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:49:17.892321  402885 system_pods.go:89] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:49:17.892325  402885 system_pods.go:89] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:49:17.892328  402885 system_pods.go:89] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:49:17.892332  402885 system_pods.go:89] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:49:17.892336  402885 system_pods.go:89] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:49:17.892343  402885 system_pods.go:126] duration metric: took 208.038563ms to wait for k8s-apps to be running ...
	I0805 11:49:17.892357  402885 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 11:49:17.892407  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:49:17.908299  402885 system_svc.go:56] duration metric: took 15.936288ms WaitForService to wait for kubelet
	I0805 11:49:17.908332  402885 kubeadm.go:582] duration metric: took 21.591309871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
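The kubelet check is a single systemd query executed over SSH on the node. The same command can be run by hand through the minikube CLI (profile name taken from this run; a zero exit status is what the test treats as running):

    # Mirror of the command in the log; exit 0 means the kubelet unit is active
    minikube -p ha-672593 ssh -- sudo systemctl is-active --quiet service kubelet && echo kubelet active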
	I0805 11:49:17.908358  402885 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:49:18.080827  402885 request.go:629] Waited for 172.374358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0805 11:49:18.080907  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0805 11:49:18.080914  402885 round_trippers.go:469] Request Headers:
	I0805 11:49:18.080921  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:49:18.080927  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:49:18.084595  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:49:18.085599  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:49:18.085631  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:49:18.085646  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:49:18.085652  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:49:18.085658  402885 node_conditions.go:105] duration metric: took 177.294354ms to run NodePressure ...
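The NodePressure step reads each node's reported capacity from the /api/v1/nodes listing, which is why the ephemeral-storage and cpu figures appear once per node above. The same values can be pulled with kubectl, for example (context name again assumed to match the profile):

    # Print the per-node capacity fields the log checks
    kubectl --context ha-672593 get nodes \
      -o custom-columns=NAME:.metadata.name,EPHEMERAL:.status.capacity.ephemeral-storage,CPU:.status.capacity.cpu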
	I0805 11:49:18.085674  402885 start.go:241] waiting for startup goroutines ...
	I0805 11:49:18.085706  402885 start.go:255] writing updated cluster config ...
	I0805 11:49:18.087856  402885 out.go:177] 
	I0805 11:49:18.089404  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:49:18.089497  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:49:18.091027  402885 out.go:177] * Starting "ha-672593-m03" control-plane node in "ha-672593" cluster
	I0805 11:49:18.092227  402885 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:49:18.092250  402885 cache.go:56] Caching tarball of preloaded images
	I0805 11:49:18.092364  402885 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:49:18.092381  402885 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:49:18.092499  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:49:18.092715  402885 start.go:360] acquireMachinesLock for ha-672593-m03: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:49:18.092771  402885 start.go:364] duration metric: took 30.723µs to acquireMachinesLock for "ha-672593-m03"
	I0805 11:49:18.092793  402885 start.go:93] Provisioning new machine with config: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:49:18.092931  402885 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0805 11:49:18.094466  402885 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 11:49:18.094601  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:18.094640  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:18.110518  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0805 11:49:18.110993  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:18.111496  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:18.111518  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:18.111888  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:18.112100  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:18.112278  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:18.112468  402885 start.go:159] libmachine.API.Create for "ha-672593" (driver="kvm2")
	I0805 11:49:18.112503  402885 client.go:168] LocalClient.Create starting
	I0805 11:49:18.112548  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 11:49:18.112595  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:49:18.112618  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:49:18.112691  402885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 11:49:18.112729  402885 main.go:141] libmachine: Decoding PEM data...
	I0805 11:49:18.112747  402885 main.go:141] libmachine: Parsing certificate...
	I0805 11:49:18.112773  402885 main.go:141] libmachine: Running pre-create checks...
	I0805 11:49:18.112786  402885 main.go:141] libmachine: (ha-672593-m03) Calling .PreCreateCheck
	I0805 11:49:18.112944  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetConfigRaw
	I0805 11:49:18.113366  402885 main.go:141] libmachine: Creating machine...
	I0805 11:49:18.113383  402885 main.go:141] libmachine: (ha-672593-m03) Calling .Create
	I0805 11:49:18.113521  402885 main.go:141] libmachine: (ha-672593-m03) Creating KVM machine...
	I0805 11:49:18.114665  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found existing default KVM network
	I0805 11:49:18.114683  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found existing private KVM network mk-ha-672593
	I0805 11:49:18.114826  402885 main.go:141] libmachine: (ha-672593-m03) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03 ...
	I0805 11:49:18.114853  402885 main.go:141] libmachine: (ha-672593-m03) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:49:18.114899  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.114816  403750 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:49:18.114988  402885 main.go:141] libmachine: (ha-672593-m03) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 11:49:18.417438  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.417283  403750 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa...
	I0805 11:49:18.618583  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.618449  403750 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/ha-672593-m03.rawdisk...
	I0805 11:49:18.618613  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Writing magic tar header
	I0805 11:49:18.618624  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Writing SSH key tar header
	I0805 11:49:18.618632  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:18.618557  403750 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03 ...
	I0805 11:49:18.618658  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03
	I0805 11:49:18.618727  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03 (perms=drwx------)
	I0805 11:49:18.618759  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 11:49:18.618772  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 11:49:18.618792  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 11:49:18.618807  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 11:49:18.618823  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 11:49:18.618837  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:49:18.618851  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 11:49:18.618868  402885 main.go:141] libmachine: (ha-672593-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 11:49:18.618878  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 11:49:18.618887  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home/jenkins
	I0805 11:49:18.618892  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Checking permissions on dir: /home
	I0805 11:49:18.618901  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Skipping /home - not owner
	I0805 11:49:18.618911  402885 main.go:141] libmachine: (ha-672593-m03) Creating domain...
	I0805 11:49:18.619646  402885 main.go:141] libmachine: (ha-672593-m03) define libvirt domain using xml: 
	I0805 11:49:18.619668  402885 main.go:141] libmachine: (ha-672593-m03) <domain type='kvm'>
	I0805 11:49:18.619677  402885 main.go:141] libmachine: (ha-672593-m03)   <name>ha-672593-m03</name>
	I0805 11:49:18.619690  402885 main.go:141] libmachine: (ha-672593-m03)   <memory unit='MiB'>2200</memory>
	I0805 11:49:18.619714  402885 main.go:141] libmachine: (ha-672593-m03)   <vcpu>2</vcpu>
	I0805 11:49:18.619731  402885 main.go:141] libmachine: (ha-672593-m03)   <features>
	I0805 11:49:18.619759  402885 main.go:141] libmachine: (ha-672593-m03)     <acpi/>
	I0805 11:49:18.619772  402885 main.go:141] libmachine: (ha-672593-m03)     <apic/>
	I0805 11:49:18.619788  402885 main.go:141] libmachine: (ha-672593-m03)     <pae/>
	I0805 11:49:18.619834  402885 main.go:141] libmachine: (ha-672593-m03)     
	I0805 11:49:18.619862  402885 main.go:141] libmachine: (ha-672593-m03)   </features>
	I0805 11:49:18.619875  402885 main.go:141] libmachine: (ha-672593-m03)   <cpu mode='host-passthrough'>
	I0805 11:49:18.619889  402885 main.go:141] libmachine: (ha-672593-m03)   
	I0805 11:49:18.619897  402885 main.go:141] libmachine: (ha-672593-m03)   </cpu>
	I0805 11:49:18.619903  402885 main.go:141] libmachine: (ha-672593-m03)   <os>
	I0805 11:49:18.619911  402885 main.go:141] libmachine: (ha-672593-m03)     <type>hvm</type>
	I0805 11:49:18.619917  402885 main.go:141] libmachine: (ha-672593-m03)     <boot dev='cdrom'/>
	I0805 11:49:18.619925  402885 main.go:141] libmachine: (ha-672593-m03)     <boot dev='hd'/>
	I0805 11:49:18.619932  402885 main.go:141] libmachine: (ha-672593-m03)     <bootmenu enable='no'/>
	I0805 11:49:18.619947  402885 main.go:141] libmachine: (ha-672593-m03)   </os>
	I0805 11:49:18.619977  402885 main.go:141] libmachine: (ha-672593-m03)   <devices>
	I0805 11:49:18.619999  402885 main.go:141] libmachine: (ha-672593-m03)     <disk type='file' device='cdrom'>
	I0805 11:49:18.620017  402885 main.go:141] libmachine: (ha-672593-m03)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/boot2docker.iso'/>
	I0805 11:49:18.620029  402885 main.go:141] libmachine: (ha-672593-m03)       <target dev='hdc' bus='scsi'/>
	I0805 11:49:18.620042  402885 main.go:141] libmachine: (ha-672593-m03)       <readonly/>
	I0805 11:49:18.620053  402885 main.go:141] libmachine: (ha-672593-m03)     </disk>
	I0805 11:49:18.620065  402885 main.go:141] libmachine: (ha-672593-m03)     <disk type='file' device='disk'>
	I0805 11:49:18.620083  402885 main.go:141] libmachine: (ha-672593-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 11:49:18.620100  402885 main.go:141] libmachine: (ha-672593-m03)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/ha-672593-m03.rawdisk'/>
	I0805 11:49:18.620110  402885 main.go:141] libmachine: (ha-672593-m03)       <target dev='hda' bus='virtio'/>
	I0805 11:49:18.620119  402885 main.go:141] libmachine: (ha-672593-m03)     </disk>
	I0805 11:49:18.620127  402885 main.go:141] libmachine: (ha-672593-m03)     <interface type='network'>
	I0805 11:49:18.620137  402885 main.go:141] libmachine: (ha-672593-m03)       <source network='mk-ha-672593'/>
	I0805 11:49:18.620147  402885 main.go:141] libmachine: (ha-672593-m03)       <model type='virtio'/>
	I0805 11:49:18.620160  402885 main.go:141] libmachine: (ha-672593-m03)     </interface>
	I0805 11:49:18.620172  402885 main.go:141] libmachine: (ha-672593-m03)     <interface type='network'>
	I0805 11:49:18.620185  402885 main.go:141] libmachine: (ha-672593-m03)       <source network='default'/>
	I0805 11:49:18.620196  402885 main.go:141] libmachine: (ha-672593-m03)       <model type='virtio'/>
	I0805 11:49:18.620205  402885 main.go:141] libmachine: (ha-672593-m03)     </interface>
	I0805 11:49:18.620216  402885 main.go:141] libmachine: (ha-672593-m03)     <serial type='pty'>
	I0805 11:49:18.620228  402885 main.go:141] libmachine: (ha-672593-m03)       <target port='0'/>
	I0805 11:49:18.620235  402885 main.go:141] libmachine: (ha-672593-m03)     </serial>
	I0805 11:49:18.620245  402885 main.go:141] libmachine: (ha-672593-m03)     <console type='pty'>
	I0805 11:49:18.620256  402885 main.go:141] libmachine: (ha-672593-m03)       <target type='serial' port='0'/>
	I0805 11:49:18.620268  402885 main.go:141] libmachine: (ha-672593-m03)     </console>
	I0805 11:49:18.620279  402885 main.go:141] libmachine: (ha-672593-m03)     <rng model='virtio'>
	I0805 11:49:18.620292  402885 main.go:141] libmachine: (ha-672593-m03)       <backend model='random'>/dev/random</backend>
	I0805 11:49:18.620304  402885 main.go:141] libmachine: (ha-672593-m03)     </rng>
	I0805 11:49:18.620318  402885 main.go:141] libmachine: (ha-672593-m03)     
	I0805 11:49:18.620328  402885 main.go:141] libmachine: (ha-672593-m03)     
	I0805 11:49:18.620336  402885 main.go:141] libmachine: (ha-672593-m03)   </devices>
	I0805 11:49:18.620348  402885 main.go:141] libmachine: (ha-672593-m03) </domain>
	I0805 11:49:18.620357  402885 main.go:141] libmachine: (ha-672593-m03) 
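The XML above is the libvirt domain definition libmachine submits for the new node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a CD-ROM boot device, the raw disk image, and two virtio NICs (the private mk-ha-672593 network plus the default network). Once defined, the stored definition can be inspected with virsh; a small sketch using the connection URI from the machine config above:

    # Confirm the domain exists and dump the definition libvirt actually stored
    virsh --connect qemu:///system list --all
    virsh --connect qemu:///system dumpxml ha-672593-m03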
	I0805 11:49:18.626581  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:19:78:59 in network default
	I0805 11:49:18.627011  402885 main.go:141] libmachine: (ha-672593-m03) Ensuring networks are active...
	I0805 11:49:18.627056  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:18.627664  402885 main.go:141] libmachine: (ha-672593-m03) Ensuring network default is active
	I0805 11:49:18.627934  402885 main.go:141] libmachine: (ha-672593-m03) Ensuring network mk-ha-672593 is active
	I0805 11:49:18.628245  402885 main.go:141] libmachine: (ha-672593-m03) Getting domain xml...
	I0805 11:49:18.628903  402885 main.go:141] libmachine: (ha-672593-m03) Creating domain...
	I0805 11:49:19.873424  402885 main.go:141] libmachine: (ha-672593-m03) Waiting to get IP...
	I0805 11:49:19.874277  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:19.874689  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:19.874750  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:19.874682  403750 retry.go:31] will retry after 267.860052ms: waiting for machine to come up
	I0805 11:49:20.144380  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:20.144868  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:20.144894  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:20.144813  403750 retry.go:31] will retry after 245.509323ms: waiting for machine to come up
	I0805 11:49:20.392488  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:20.392960  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:20.392989  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:20.392900  403750 retry.go:31] will retry after 374.508573ms: waiting for machine to come up
	I0805 11:49:20.769320  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:20.769855  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:20.769893  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:20.769790  403750 retry.go:31] will retry after 522.60364ms: waiting for machine to come up
	I0805 11:49:21.293910  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:21.294339  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:21.294363  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:21.294294  403750 retry.go:31] will retry after 472.93212ms: waiting for machine to come up
	I0805 11:49:21.768948  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:21.769410  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:21.769441  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:21.769360  403750 retry.go:31] will retry after 609.870077ms: waiting for machine to come up
	I0805 11:49:22.381431  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:22.381891  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:22.381920  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:22.381848  403750 retry.go:31] will retry after 879.361844ms: waiting for machine to come up
	I0805 11:49:23.263122  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:23.263646  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:23.263677  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:23.263608  403750 retry.go:31] will retry after 904.198074ms: waiting for machine to come up
	I0805 11:49:24.169201  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:24.169569  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:24.169593  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:24.169530  403750 retry.go:31] will retry after 1.542079417s: waiting for machine to come up
	I0805 11:49:25.714182  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:25.714581  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:25.714613  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:25.714541  403750 retry.go:31] will retry after 1.650814306s: waiting for machine to come up
	I0805 11:49:27.367413  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:27.367877  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:27.367914  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:27.367814  403750 retry.go:31] will retry after 2.4227249s: waiting for machine to come up
	I0805 11:49:29.792991  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:29.793659  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:29.793681  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:29.793600  403750 retry.go:31] will retry after 2.260664163s: waiting for machine to come up
	I0805 11:49:32.056713  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:32.057175  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:32.057202  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:32.057128  403750 retry.go:31] will retry after 3.608199099s: waiting for machine to come up
	I0805 11:49:35.668118  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:35.668530  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find current IP address of domain ha-672593-m03 in network mk-ha-672593
	I0805 11:49:35.668565  402885 main.go:141] libmachine: (ha-672593-m03) DBG | I0805 11:49:35.668472  403750 retry.go:31] will retry after 4.306357465s: waiting for machine to come up
	I0805 11:49:39.977661  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:39.978135  402885 main.go:141] libmachine: (ha-672593-m03) Found IP for machine: 192.168.39.210
	I0805 11:49:39.978160  402885 main.go:141] libmachine: (ha-672593-m03) Reserving static IP address...
	I0805 11:49:39.978173  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has current primary IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:39.978560  402885 main.go:141] libmachine: (ha-672593-m03) DBG | unable to find host DHCP lease matching {name: "ha-672593-m03", mac: "52:54:00:3d:2e:1f", ip: "192.168.39.210"} in network mk-ha-672593
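The retry loop above polls libvirt for a DHCP lease on the private network, backing off between attempts until the new VM acquires an address (192.168.39.210 in this run). The lease table it is reading can be listed directly:

    # Show DHCP leases on the minikube-created private network; the new VM appears once it has an IP
    virsh --connect qemu:///system net-dhcp-leases mk-ha-672593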
	I0805 11:49:40.049737  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Getting to WaitForSSH function...
	I0805 11:49:40.049773  402885 main.go:141] libmachine: (ha-672593-m03) Reserved static IP address: 192.168.39.210
	I0805 11:49:40.049788  402885 main.go:141] libmachine: (ha-672593-m03) Waiting for SSH to be available...
	I0805 11:49:40.052546  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.052947  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.052979  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.053116  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Using SSH client type: external
	I0805 11:49:40.053145  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa (-rw-------)
	I0805 11:49:40.053196  402885 main.go:141] libmachine: (ha-672593-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 11:49:40.053220  402885 main.go:141] libmachine: (ha-672593-m03) DBG | About to run SSH command:
	I0805 11:49:40.053238  402885 main.go:141] libmachine: (ha-672593-m03) DBG | exit 0
	I0805 11:49:40.175631  402885 main.go:141] libmachine: (ha-672593-m03) DBG | SSH cmd err, output: <nil>: 
	I0805 11:49:40.175924  402885 main.go:141] libmachine: (ha-672593-m03) KVM machine creation complete!
	I0805 11:49:40.176257  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetConfigRaw
	I0805 11:49:40.176807  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:40.176987  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:40.177152  402885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 11:49:40.177165  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:49:40.178340  402885 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 11:49:40.178354  402885 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 11:49:40.178365  402885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 11:49:40.178370  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.180369  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.180743  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.180777  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.180920  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.181087  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.181238  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.181368  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.181525  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.181796  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.181811  402885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 11:49:40.282845  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
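WaitForSSH simply runs "exit 0" over SSH, with host-key checking disabled, until the connection succeeds. A hand-run equivalent using the key path, user, and options recorded in the log above:

    # Probe SSH reachability the same way libmachine does; success (exit 0) means the guest is reachable
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
      -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa \
      docker@192.168.39.210 'exit 0' && echo ssh ready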
	I0805 11:49:40.282874  402885 main.go:141] libmachine: Detecting the provisioner...
	I0805 11:49:40.282885  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.285502  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.285964  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.285989  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.286179  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.286403  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.286646  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.286792  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.286948  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.287171  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.287185  402885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 11:49:40.388799  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 11:49:40.388895  402885 main.go:141] libmachine: found compatible host: buildroot
	I0805 11:49:40.388910  402885 main.go:141] libmachine: Provisioning with buildroot...
	I0805 11:49:40.388926  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:40.389198  402885 buildroot.go:166] provisioning hostname "ha-672593-m03"
	I0805 11:49:40.389226  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:40.389431  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.391957  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.392397  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.392423  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.392547  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.392704  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.392865  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.393039  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.393243  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.393412  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.393426  402885 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593-m03 && echo "ha-672593-m03" | sudo tee /etc/hostname
	I0805 11:49:40.510646  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593-m03
	
	I0805 11:49:40.510680  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.513341  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.513623  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.513661  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.513905  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.514109  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.514274  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.514454  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.514639  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.514835  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.514852  402885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:49:40.626042  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:49:40.626072  402885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:49:40.626094  402885 buildroot.go:174] setting up certificates
	I0805 11:49:40.626103  402885 provision.go:84] configureAuth start
	I0805 11:49:40.626114  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetMachineName
	I0805 11:49:40.626432  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:40.629094  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.629495  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.629523  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.629677  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.631827  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.632138  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.632168  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.632284  402885 provision.go:143] copyHostCerts
	I0805 11:49:40.632321  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:49:40.632388  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:49:40.632409  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:49:40.632486  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:49:40.632570  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:49:40.632596  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:49:40.632604  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:49:40.632630  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:49:40.632675  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:49:40.632695  402885 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:49:40.632701  402885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:49:40.632721  402885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:49:40.632769  402885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593-m03 san=[127.0.0.1 192.168.39.210 ha-672593-m03 localhost minikube]
	I0805 11:49:40.789050  402885 provision.go:177] copyRemoteCerts
	I0805 11:49:40.789114  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:49:40.789142  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.791859  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.792190  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.792216  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.792445  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.792669  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.792858  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.793040  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:40.876523  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:49:40.876619  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:49:40.900431  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:49:40.900512  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 11:49:40.923930  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:49:40.924001  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 11:49:40.948068  402885 provision.go:87] duration metric: took 321.949684ms to configureAuth
	I0805 11:49:40.948097  402885 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:49:40.948344  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:49:40.948463  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:40.951011  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.951445  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:40.951477  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:40.951644  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:40.951886  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.952061  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:40.952187  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:40.952338  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:40.952510  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:40.952524  402885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:49:41.209174  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:49:41.209206  402885 main.go:141] libmachine: Checking connection to Docker...
	I0805 11:49:41.209215  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetURL
	I0805 11:49:41.210659  402885 main.go:141] libmachine: (ha-672593-m03) DBG | Using libvirt version 6000000
	I0805 11:49:41.213052  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.213509  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.213539  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.213704  402885 main.go:141] libmachine: Docker is up and running!
	I0805 11:49:41.213720  402885 main.go:141] libmachine: Reticulating splines...
	I0805 11:49:41.213728  402885 client.go:171] duration metric: took 23.101213828s to LocalClient.Create
	I0805 11:49:41.213756  402885 start.go:167] duration metric: took 23.101289851s to libmachine.API.Create "ha-672593"
	I0805 11:49:41.213769  402885 start.go:293] postStartSetup for "ha-672593-m03" (driver="kvm2")
	I0805 11:49:41.213786  402885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:49:41.213810  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.214069  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:49:41.214089  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:41.216132  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.216484  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.216515  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.216666  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.216855  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.217016  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.217125  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:41.299248  402885 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:49:41.303540  402885 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:49:41.303568  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:49:41.303653  402885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:49:41.303770  402885 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:49:41.303788  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:49:41.303904  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:49:41.313868  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:49:41.341779  402885 start.go:296] duration metric: took 127.992765ms for postStartSetup
	I0805 11:49:41.341833  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetConfigRaw
	I0805 11:49:41.342533  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:41.345689  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.346158  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.346190  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.346491  402885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:49:41.346691  402885 start.go:128] duration metric: took 23.253744147s to createHost
	I0805 11:49:41.346721  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:41.349004  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.349345  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.349381  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.349519  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.349713  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.349876  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.349994  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.350202  402885 main.go:141] libmachine: Using SSH client type: native
	I0805 11:49:41.350410  402885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0805 11:49:41.350424  402885 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:49:41.452458  402885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722858581.427209706
	
	I0805 11:49:41.452487  402885 fix.go:216] guest clock: 1722858581.427209706
	I0805 11:49:41.452495  402885 fix.go:229] Guest: 2024-08-05 11:49:41.427209706 +0000 UTC Remote: 2024-08-05 11:49:41.34670633 +0000 UTC m=+159.977911377 (delta=80.503376ms)
	I0805 11:49:41.452514  402885 fix.go:200] guest clock delta is within tolerance: 80.503376ms
	I0805 11:49:41.452522  402885 start.go:83] releasing machines lock for "ha-672593-m03", held for 23.359741777s
	I0805 11:49:41.452547  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.452802  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:41.455471  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.455836  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.455857  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.457788  402885 out.go:177] * Found network options:
	I0805 11:49:41.459200  402885 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.68
	W0805 11:49:41.460706  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 11:49:41.460726  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:49:41.460740  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.461235  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.461410  402885 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:49:41.461510  402885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:49:41.461549  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	W0805 11:49:41.461615  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 11:49:41.461641  402885 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 11:49:41.461716  402885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:49:41.461764  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:49:41.464420  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.464679  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.464853  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.464880  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.464999  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.465107  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:41.465134  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:41.465172  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.465283  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:49:41.465371  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.465462  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:49:41.465549  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:41.465591  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:49:41.465704  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:49:41.695763  402885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:49:41.701998  402885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:49:41.702066  402885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:49:41.718465  402885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 11:49:41.718491  402885 start.go:495] detecting cgroup driver to use...
	I0805 11:49:41.718598  402885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:49:41.735354  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:49:41.749933  402885 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:49:41.750012  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:49:41.764742  402885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:49:41.780242  402885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:49:41.901102  402885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:49:42.077051  402885 docker.go:233] disabling docker service ...
	I0805 11:49:42.077130  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:49:42.093296  402885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:49:42.106818  402885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:49:42.240445  402885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:49:42.372217  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:49:42.388583  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:49:42.407950  402885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:49:42.408024  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.418305  402885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:49:42.418360  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.428269  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.437963  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.447753  402885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:49:42.458747  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.469409  402885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.487180  402885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:49:42.498318  402885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:49:42.508644  402885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 11:49:42.508697  402885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 11:49:42.522991  402885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:49:42.532658  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:49:42.651868  402885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:49:42.786123  402885 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:49:42.786203  402885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:49:42.791223  402885 start.go:563] Will wait 60s for crictl version
	I0805 11:49:42.791280  402885 ssh_runner.go:195] Run: which crictl
	I0805 11:49:42.795237  402885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:49:42.837359  402885 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:49:42.837466  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:49:42.865426  402885 ssh_runner.go:195] Run: crio --version
	I0805 11:49:42.895825  402885 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:49:42.897127  402885 out.go:177]   - env NO_PROXY=192.168.39.102
	I0805 11:49:42.898310  402885 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.68
	I0805 11:49:42.899503  402885 main.go:141] libmachine: (ha-672593-m03) Calling .GetIP
	I0805 11:49:42.902494  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:42.902908  402885 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:49:42.902938  402885 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:49:42.903194  402885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:49:42.907439  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:49:42.920962  402885 mustload.go:65] Loading cluster: ha-672593
	I0805 11:49:42.921198  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:49:42.921455  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:42.921497  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:42.936259  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0805 11:49:42.936727  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:42.937191  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:42.937213  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:42.937525  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:42.937752  402885 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:49:42.939304  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:49:42.939685  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:42.939728  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:42.955663  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I0805 11:49:42.956157  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:42.956605  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:42.956626  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:42.956921  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:42.957073  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:49:42.957257  402885 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.210
	I0805 11:49:42.957268  402885 certs.go:194] generating shared ca certs ...
	I0805 11:49:42.957286  402885 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:49:42.957406  402885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:49:42.957445  402885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:49:42.957454  402885 certs.go:256] generating profile certs ...
	I0805 11:49:42.957523  402885 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:49:42.957545  402885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae
	I0805 11:49:42.957560  402885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.68 192.168.39.210 192.168.39.254]
	I0805 11:49:43.159482  402885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae ...
	I0805 11:49:43.159512  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae: {Name:mk9efa0743d1a8bc6f436032786c5c9439a3c942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:49:43.159679  402885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae ...
	I0805 11:49:43.159692  402885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae: {Name:mk1f341e70467d49b67ce7b0a18ef6fdf82f8399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:49:43.159779  402885 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.0007e6ae -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:49:43.159912  402885 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.0007e6ae -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:49:43.160042  402885 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:49:43.160060  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:49:43.160074  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:49:43.160087  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:49:43.160104  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:49:43.160116  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:49:43.160129  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:49:43.160148  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:49:43.160160  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:49:43.160218  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:49:43.160251  402885 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:49:43.160261  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:49:43.160281  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:49:43.160301  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:49:43.160325  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:49:43.160369  402885 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:49:43.160395  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.160411  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.160422  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.160456  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:49:43.163504  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:43.163971  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:49:43.164001  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:43.164208  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:49:43.164424  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:49:43.164600  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:49:43.164759  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:49:43.244031  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 11:49:43.249619  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 11:49:43.261087  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 11:49:43.265484  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 11:49:43.276152  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 11:49:43.280485  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 11:49:43.290860  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 11:49:43.296389  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 11:49:43.306729  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 11:49:43.313915  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 11:49:43.325500  402885 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 11:49:43.331434  402885 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0805 11:49:43.345944  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:49:43.372230  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:49:43.395401  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:49:43.418504  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:49:43.441270  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0805 11:49:43.464130  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 11:49:43.486038  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:49:43.509477  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:49:43.533958  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:49:43.558887  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:49:43.582322  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:49:43.604921  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 11:49:43.622098  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 11:49:43.638169  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 11:49:43.654185  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 11:49:43.671899  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 11:49:43.688657  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0805 11:49:43.705637  402885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 11:49:43.721912  402885 ssh_runner.go:195] Run: openssl version
	I0805 11:49:43.727649  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:49:43.738314  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.742602  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.742656  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:49:43.748430  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:49:43.758991  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:49:43.770286  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.774655  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.774708  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:49:43.780544  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 11:49:43.792673  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:49:43.803111  402885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.807374  402885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.807420  402885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:49:43.812770  402885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 11:49:43.824737  402885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:49:43.828599  402885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 11:49:43.828677  402885 kubeadm.go:934] updating node {m03 192.168.39.210 8443 v1.30.3 crio true true} ...
	I0805 11:49:43.828791  402885 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:49:43.828820  402885 kube-vip.go:115] generating kube-vip config ...
	I0805 11:49:43.828879  402885 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:49:43.845482  402885 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:49:43.845552  402885 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0805 11:49:43.845603  402885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:49:43.855094  402885 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 11:49:43.855162  402885 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 11:49:43.864668  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0805 11:49:43.864697  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:49:43.864756  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 11:49:43.864668  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 11:49:43.864674  402885 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0805 11:49:43.864790  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:49:43.864827  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:49:43.864880  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 11:49:43.868983  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 11:49:43.869002  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 11:49:43.907054  402885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:49:43.907106  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 11:49:43.907135  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 11:49:43.907177  402885 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 11:49:43.965501  402885 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 11:49:43.965554  402885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0805 11:49:44.719819  402885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 11:49:44.730257  402885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 11:49:44.747268  402885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:49:44.763768  402885 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 11:49:44.782179  402885 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:49:44.786179  402885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 11:49:44.799255  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:49:44.919104  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:49:44.937494  402885 host.go:66] Checking if "ha-672593" exists ...
	I0805 11:49:44.937870  402885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:49:44.937915  402885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:49:44.953791  402885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0805 11:49:44.954174  402885 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:49:44.954651  402885 main.go:141] libmachine: Using API Version  1
	I0805 11:49:44.954681  402885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:49:44.954995  402885 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:49:44.955244  402885 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:49:44.955389  402885 start.go:317] joinCluster: &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:49:44.955515  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 11:49:44.955535  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:49:44.958550  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:44.959052  402885 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:49:44.959079  402885 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:49:44.959226  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:49:44.959407  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:49:44.959582  402885 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:49:44.959772  402885 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:49:45.117881  402885 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:49:45.117941  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vtxzg0.uer1bhotyz2fxpnt --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443"
	I0805 11:50:08.729790  402885 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vtxzg0.uer1bhotyz2fxpnt --discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-672593-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443": (23.611805037s)
	I0805 11:50:08.729835  402885 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 11:50:09.417094  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-672593-m03 minikube.k8s.io/updated_at=2024_08_05T11_50_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=ha-672593 minikube.k8s.io/primary=false
	I0805 11:50:09.555213  402885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-672593-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 11:50:09.677594  402885 start.go:319] duration metric: took 24.722198513s to joinCluster
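
The sequence above is the standard kubeadm HA join flow: create a fresh join command on an existing control-plane node, run it on the new machine with --control-plane, then label the node and remove the control-plane NoSchedule taint so it can also carry workloads. A rough sketch of the first two steps, assuming hypothetical runOnPrimary/runOnJoiner SSH helpers (this is not minikube's API):

    package join

    import (
        "fmt"
        "strings"
    )

    // joinControlPlane asks the primary node for a join command, then runs it on
    // the joining node with the control-plane flags, mirroring the log above.
    func joinControlPlane(runOnPrimary, runOnJoiner func(cmd string) (string, error), nodeName, advertiseIP string) error {
        joinCmd, err := runOnPrimary("kubeadm token create --print-join-command --ttl=0")
        if err != nil {
            return fmt.Errorf("creating join token: %w", err)
        }
        full := fmt.Sprintf("sudo %s --control-plane --apiserver-advertise-address=%s --node-name=%s",
            strings.TrimSpace(joinCmd), advertiseIP, nodeName)
        if _, err := runOnJoiner(full); err != nil {
            return fmt.Errorf("kubeadm join: %w", err)
        }
        return nil
    }
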
	I0805 11:50:09.677673  402885 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 11:50:09.677971  402885 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:50:09.679027  402885 out.go:177] * Verifying Kubernetes components...
	I0805 11:50:09.680646  402885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:50:09.942761  402885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:50:10.023334  402885 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:50:10.023698  402885 kapi.go:59] client config for ha-672593: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key", CAFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 11:50:10.023834  402885 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
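
The warning above is expected at this stage: the kubeconfig points at the HA VIP (192.168.39.254), but while a member is still joining, the readiness checks are aimed at one concrete control-plane endpoint (192.168.39.102), so the client config's host is overridden in place. A sketch of that kind of override using client-go's rest package (assumed available; minikube's own code differs):

    package client

    import "k8s.io/client-go/rest"

    // withDirectHost keeps the TLS material from the loaded kubeconfig but points
    // the client at a single control-plane endpoint instead of the shared VIP.
    func withDirectHost(cfg *rest.Config, host string) *rest.Config {
        out := rest.CopyConfig(cfg)
        out.Host = host // e.g. "https://192.168.39.102:8443"
        return out
    }
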
	I0805 11:50:10.024130  402885 node_ready.go:35] waiting up to 6m0s for node "ha-672593-m03" to be "Ready" ...
	I0805 11:50:10.024226  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:10.024236  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:10.024246  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:10.024256  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:10.028165  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:10.524842  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:10.524871  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:10.524883  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:10.524890  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:10.529120  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:11.024991  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:11.025013  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:11.025021  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:11.025027  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:11.028905  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:11.524966  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:11.524996  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:11.525009  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:11.525015  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:11.528136  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:12.024883  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:12.024907  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:12.024916  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:12.024921  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:12.027768  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:12.028384  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:12.524450  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:12.524476  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:12.524488  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:12.524496  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:12.528335  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:13.025222  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:13.025246  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:13.025255  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:13.025260  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:13.029083  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:13.524572  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:13.524607  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:13.524615  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:13.524627  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:13.527835  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:14.024390  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:14.024413  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:14.024424  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:14.024429  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:14.028290  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:14.029078  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:14.525035  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:14.525055  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:14.525066  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:14.525072  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:14.528859  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:15.024339  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:15.024362  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:15.024370  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:15.024376  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:15.027697  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:15.524691  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:15.524720  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:15.524728  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:15.524733  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:15.527918  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:16.024536  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:16.024561  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:16.024570  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:16.024574  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:16.028995  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:16.030417  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:16.525306  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:16.525335  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:16.525363  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:16.525373  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:16.530664  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:50:17.025263  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:17.025292  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:17.025303  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:17.025309  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:17.028817  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:17.524687  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:17.524711  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:17.524718  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:17.524722  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:17.528498  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:18.024565  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:18.024589  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:18.024598  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:18.024603  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:18.027864  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:18.525024  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:18.525048  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:18.525056  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:18.525061  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:18.528139  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:18.528928  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:19.025141  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:19.025166  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:19.025174  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:19.025178  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:19.028526  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:19.524651  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:19.524683  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:19.524696  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:19.524704  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:19.528218  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:20.024501  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:20.024524  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:20.024534  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:20.024538  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:20.027848  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:20.525108  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:20.525149  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:20.525163  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:20.525170  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:20.528777  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:20.529374  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:21.024348  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:21.024369  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:21.024377  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:21.024382  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:21.028029  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:21.524857  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:21.524881  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:21.524889  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:21.524892  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:21.528187  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:22.025326  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:22.025348  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:22.025357  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:22.025362  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:22.028438  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:22.525068  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:22.525095  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:22.525107  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:22.525114  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:22.528962  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:22.529725  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:23.024957  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:23.024979  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:23.024988  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:23.024992  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:23.028631  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:23.525048  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:23.525071  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:23.525087  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:23.525092  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:23.528562  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:24.024433  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:24.024461  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:24.024495  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:24.024500  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:24.027667  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:24.524622  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:24.524649  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:24.524661  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:24.524667  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:24.528997  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:25.025366  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:25.025394  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:25.025405  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:25.025412  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:25.028714  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:25.029361  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:25.524394  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:25.524419  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:25.524426  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:25.524430  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:25.528468  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:26.024479  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:26.024501  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:26.024530  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:26.024535  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:26.027751  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:26.524434  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:26.524515  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:26.524533  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:26.524540  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:26.529314  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:27.025202  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:27.025227  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:27.025239  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:27.025246  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:27.028787  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:27.029468  402885 node_ready.go:53] node "ha-672593-m03" has status "Ready":"False"
	I0805 11:50:27.524823  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:27.524851  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:27.524861  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:27.524869  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:27.528579  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.025351  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.025376  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.025385  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.025390  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.028596  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.525015  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.525038  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.525047  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.525051  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.528823  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.529486  402885 node_ready.go:49] node "ha-672593-m03" has status "Ready":"True"
	I0805 11:50:28.529505  402885 node_ready.go:38] duration metric: took 18.505353861s for node "ha-672593-m03" to be "Ready" ...
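
The ~18.5s of repeated GET /api/v1/nodes/ha-672593-m03 calls above are a plain readiness poll on the node's Ready condition (minikube issues the requests through its own transport, which is why every poll appears in the round_trippers lines). Written against client-go, the equivalent check would look roughly like this sketch:

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition is True, mirroring
    // the ~500ms GET loop in the log above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
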
	I0805 11:50:28.529514  402885 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:50:28.529585  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:28.529594  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.529601  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.529605  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.536212  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:50:28.544069  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.544156  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sfh7c
	I0805 11:50:28.544166  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.544173  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.544180  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.546731  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.547413  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:28.547427  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.547435  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.547439  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.550078  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.550479  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.550496  402885 pod_ready.go:81] duration metric: took 6.406258ms for pod "coredns-7db6d8ff4d-sfh7c" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.550506  402885 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.550578  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sgd4v
	I0805 11:50:28.550589  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.550599  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.550605  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.553192  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.555721  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:28.555751  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.555762  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.555768  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.558455  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.559075  402885 pod_ready.go:92] pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.559096  402885 pod_ready.go:81] duration metric: took 8.581234ms for pod "coredns-7db6d8ff4d-sgd4v" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.559108  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.559181  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593
	I0805 11:50:28.559190  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.559199  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.559204  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.562010  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.562791  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:28.562805  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.562811  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.562815  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.565381  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.565930  402885 pod_ready.go:92] pod "etcd-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.565945  402885 pod_ready.go:81] duration metric: took 6.830097ms for pod "etcd-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.565959  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.566023  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m02
	I0805 11:50:28.566031  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.566038  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.566045  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.568834  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:28.569482  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:28.569495  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.569502  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.569505  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.572587  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.573289  402885 pod_ready.go:92] pod "etcd-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.573311  402885 pod_ready.go:81] duration metric: took 7.339266ms for pod "etcd-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.573323  402885 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:28.725742  402885 request.go:629] Waited for 152.340768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m03
	I0805 11:50:28.725827  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-672593-m03
	I0805 11:50:28.725833  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.725841  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.725847  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.729252  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.925847  402885 request.go:629] Waited for 195.914849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.925936  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:28.925946  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:28.925957  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:28.925965  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:28.929403  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:28.929892  402885 pod_ready.go:92] pod "etcd-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:28.929914  402885 pod_ready.go:81] duration metric: took 356.582949ms for pod "etcd-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
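
The "Waited ... due to client-side throttling" messages above and below are not API-server priority-and-fairness; they come from client-go's default client-side rate limit (QPS 5, burst 10), which the burst of per-pod and per-node GETs briefly exceeds. A test client that wanted to avoid those ~200ms pauses could raise the limits when building its config, as in this sketch (minikube keeps the defaults here, hence the waits):

    package client

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient loads a kubeconfig and raises the client-side rate limits so
    // short bursts of GETs are not delayed by the default QPS=5/Burst=10 limiter.
    func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }
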
	I0805 11:50:28.929937  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.126071  402885 request.go:629] Waited for 196.051705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:50:29.126148  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593
	I0805 11:50:29.126154  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.126164  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.126182  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.129507  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.325716  402885 request.go:629] Waited for 195.378242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:29.325793  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:29.325804  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.325815  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.325823  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.329462  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.330002  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:29.330023  402885 pod_ready.go:81] duration metric: took 400.076496ms for pod "kube-apiserver-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.330038  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.526046  402885 request.go:629] Waited for 195.91009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:50:29.526146  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m02
	I0805 11:50:29.526157  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.526169  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.526177  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.529498  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.725557  402885 request.go:629] Waited for 195.364105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:29.725617  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:29.725622  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.725630  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.725634  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.729006  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:29.729651  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:29.729675  402885 pod_ready.go:81] duration metric: took 399.625672ms for pod "kube-apiserver-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.729685  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:29.925776  402885 request.go:629] Waited for 195.98755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m03
	I0805 11:50:29.925853  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-672593-m03
	I0805 11:50:29.925858  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:29.925866  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:29.925871  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:29.929515  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.125672  402885 request.go:629] Waited for 195.388467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:30.125758  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:30.125767  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.125783  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.125792  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.128922  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.129650  402885 pod_ready.go:92] pod "kube-apiserver-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:30.129684  402885 pod_ready.go:81] duration metric: took 399.992597ms for pod "kube-apiserver-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.129695  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.325756  402885 request.go:629] Waited for 195.988109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:50:30.325875  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593
	I0805 11:50:30.325886  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.325911  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.325919  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.329291  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.525991  402885 request.go:629] Waited for 196.004967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:30.526071  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:30.526079  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.526086  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.526094  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.529610  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.530247  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:30.530267  402885 pod_ready.go:81] duration metric: took 400.565722ms for pod "kube-controller-manager-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.530278  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.725335  402885 request.go:629] Waited for 194.965338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:50:30.725416  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m02
	I0805 11:50:30.725423  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.725433  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.725438  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.729104  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.925016  402885 request.go:629] Waited for 195.311921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:30.925096  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:30.925104  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:30.925116  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:30.925127  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:30.928500  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:30.928949  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:30.928990  402885 pod_ready.go:81] duration metric: took 398.695012ms for pod "kube-controller-manager-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:30.929005  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.125084  402885 request.go:629] Waited for 195.981924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m03
	I0805 11:50:31.125154  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-672593-m03
	I0805 11:50:31.125160  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.125168  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.125172  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.128777  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:31.326045  402885 request.go:629] Waited for 196.356047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.326145  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.326157  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.326170  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.326178  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.329154  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:31.329655  402885 pod_ready.go:92] pod "kube-controller-manager-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:31.329687  402885 pod_ready.go:81] duration metric: took 400.672393ms for pod "kube-controller-manager-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.329709  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q4tq" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.525634  402885 request.go:629] Waited for 195.841646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q4tq
	I0805 11:50:31.525698  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q4tq
	I0805 11:50:31.525704  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.525711  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.525716  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.530351  402885 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 11:50:31.725340  402885 request.go:629] Waited for 194.093948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.725402  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:31.725410  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.725418  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.725425  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.728593  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:31.729392  402885 pod_ready.go:92] pod "kube-proxy-4q4tq" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:31.729413  402885 pod_ready.go:81] duration metric: took 399.693493ms for pod "kube-proxy-4q4tq" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.729422  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:31.925942  402885 request.go:629] Waited for 196.449987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:50:31.926015  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdwh2
	I0805 11:50:31.926020  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:31.926027  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:31.926035  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:31.929371  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.125350  402885 request.go:629] Waited for 195.285703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:32.125432  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:32.125443  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.125454  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.125466  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.128650  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.129297  402885 pod_ready.go:92] pod "kube-proxy-mdwh2" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:32.129317  402885 pod_ready.go:81] duration metric: took 399.886843ms for pod "kube-proxy-mdwh2" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.129329  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.325417  402885 request.go:629] Waited for 196.006397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:50:32.325498  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wtsdt
	I0805 11:50:32.325504  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.325511  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.325516  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.329140  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.525094  402885 request.go:629] Waited for 195.277586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.525184  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.525194  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.525203  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.525210  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.528764  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.529396  402885 pod_ready.go:92] pod "kube-proxy-wtsdt" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:32.529413  402885 pod_ready.go:81] duration metric: took 400.078107ms for pod "kube-proxy-wtsdt" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.529423  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.725553  402885 request.go:629] Waited for 196.049403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:50:32.725649  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593
	I0805 11:50:32.725661  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.725671  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.725681  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.728972  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.926030  402885 request.go:629] Waited for 196.35358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.926106  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593
	I0805 11:50:32.926113  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:32.926130  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:32.926141  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:32.929853  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:32.930549  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:32.930571  402885 pod_ready.go:81] duration metric: took 401.138815ms for pod "kube-scheduler-ha-672593" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:32.930584  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.125705  402885 request.go:629] Waited for 195.022367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:50:33.125778  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m02
	I0805 11:50:33.125787  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.125801  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.125810  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.129480  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:33.325298  402885 request.go:629] Waited for 195.162835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:33.325358  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m02
	I0805 11:50:33.325363  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.325371  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.325375  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.328055  402885 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 11:50:33.328850  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:33.328867  402885 pod_ready.go:81] duration metric: took 398.275917ms for pod "kube-scheduler-ha-672593-m02" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.328877  402885 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.525913  402885 request.go:629] Waited for 196.95991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m03
	I0805 11:50:33.525994  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-672593-m03
	I0805 11:50:33.526003  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.526037  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.526049  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.529495  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:33.725425  402885 request.go:629] Waited for 195.362958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:33.725514  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-672593-m03
	I0805 11:50:33.725530  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.725554  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.725567  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.731244  402885 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 11:50:33.731686  402885 pod_ready.go:92] pod "kube-scheduler-ha-672593-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 11:50:33.731706  402885 pod_ready.go:81] duration metric: took 402.821942ms for pod "kube-scheduler-ha-672593-m03" in "kube-system" namespace to be "Ready" ...
	I0805 11:50:33.731716  402885 pod_ready.go:38] duration metric: took 5.202193895s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:50:33.731731  402885 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:50:33.731796  402885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:50:33.748061  402885 api_server.go:72] duration metric: took 24.070351029s to wait for apiserver process to appear ...
	I0805 11:50:33.748096  402885 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:50:33.748115  402885 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0805 11:50:33.752440  402885 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0805 11:50:33.752511  402885 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0805 11:50:33.752519  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.752528  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.752532  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.753367  402885 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 11:50:33.753438  402885 api_server.go:141] control plane version: v1.30.3
	I0805 11:50:33.753452  402885 api_server.go:131] duration metric: took 5.350181ms to wait for apiserver health ...
	I0805 11:50:33.753461  402885 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:50:33.925639  402885 request.go:629] Waited for 172.091982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:33.925722  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:33.925730  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:33.925742  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:33.925750  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:33.932854  402885 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 11:50:33.940101  402885 system_pods.go:59] 24 kube-system pods found
	I0805 11:50:33.940129  402885 system_pods.go:61] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:50:33.940136  402885 system_pods.go:61] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:50:33.940141  402885 system_pods.go:61] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:50:33.940147  402885 system_pods.go:61] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:50:33.940153  402885 system_pods.go:61] "etcd-ha-672593-m03" [6091761d-b610-4448-a853-274433bb59d0] Running
	I0805 11:50:33.940157  402885 system_pods.go:61] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:50:33.940161  402885 system_pods.go:61] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:50:33.940166  402885 system_pods.go:61] "kindnet-wnbr8" [351b5e1b-5da4-442d-96b7-213e3e9a74aa] Running
	I0805 11:50:33.940171  402885 system_pods.go:61] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:50:33.940175  402885 system_pods.go:61] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:50:33.940180  402885 system_pods.go:61] "kube-apiserver-ha-672593-m03" [1e3694f4-9bc0-4e9a-8e1c-179bbb1c78ca] Running
	I0805 11:50:33.940188  402885 system_pods.go:61] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:50:33.940195  402885 system_pods.go:61] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:50:33.940200  402885 system_pods.go:61] "kube-controller-manager-ha-672593-m03" [c30415ed-5173-4283-9174-72d05ed227cc] Running
	I0805 11:50:33.940205  402885 system_pods.go:61] "kube-proxy-4q4tq" [44cceade-cf8b-4c4d-b06e-c83c3f20bd3a] Running
	I0805 11:50:33.940210  402885 system_pods.go:61] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:50:33.940215  402885 system_pods.go:61] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:50:33.940223  402885 system_pods.go:61] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:50:33.940228  402885 system_pods.go:61] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:50:33.940232  402885 system_pods.go:61] "kube-scheduler-ha-672593-m03" [9734cd6a-7e2a-4a7e-99e9-87b72c55a073] Running
	I0805 11:50:33.940237  402885 system_pods.go:61] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:50:33.940244  402885 system_pods.go:61] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:50:33.940249  402885 system_pods.go:61] "kube-vip-ha-672593-m03" [abc05dea-8108-4a5e-a223-1410c903fccc] Running
	I0805 11:50:33.940256  402885 system_pods.go:61] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:50:33.940266  402885 system_pods.go:74] duration metric: took 186.796553ms to wait for pod list to return data ...
	I0805 11:50:33.940278  402885 default_sa.go:34] waiting for default service account to be created ...
	I0805 11:50:34.125710  402885 request.go:629] Waited for 185.326504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:50:34.125770  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0805 11:50:34.125775  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:34.125783  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:34.125791  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:34.129318  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:34.129445  402885 default_sa.go:45] found service account: "default"
	I0805 11:50:34.129459  402885 default_sa.go:55] duration metric: took 189.171631ms for default service account to be created ...
	I0805 11:50:34.129467  402885 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 11:50:34.325892  402885 request.go:629] Waited for 196.35086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:34.325994  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0805 11:50:34.326006  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:34.326016  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:34.326022  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:34.332670  402885 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 11:50:34.338575  402885 system_pods.go:86] 24 kube-system pods found
	I0805 11:50:34.338603  402885 system_pods.go:89] "coredns-7db6d8ff4d-sfh7c" [98c09423-e24f-4d26-b7f9-3da3986d538b] Running
	I0805 11:50:34.338609  402885 system_pods.go:89] "coredns-7db6d8ff4d-sgd4v" [58ff9d45-f09f-4213-b1c3-d568ee5ab68a] Running
	I0805 11:50:34.338613  402885 system_pods.go:89] "etcd-ha-672593" [379ffb87-5649-41f5-8095-d7196c401f79] Running
	I0805 11:50:34.338617  402885 system_pods.go:89] "etcd-ha-672593-m02" [ea52f3ac-f7d5-407e-ba4e-a01e5effbf97] Running
	I0805 11:50:34.338621  402885 system_pods.go:89] "etcd-ha-672593-m03" [6091761d-b610-4448-a853-274433bb59d0] Running
	I0805 11:50:34.338626  402885 system_pods.go:89] "kindnet-7fndz" [6bdb2b4a-e7c6-4e03-80f8-cf80501095c4] Running
	I0805 11:50:34.338630  402885 system_pods.go:89] "kindnet-85fm7" [404455ee-e31a-4c52-bf6f-f16546652f70] Running
	I0805 11:50:34.338634  402885 system_pods.go:89] "kindnet-wnbr8" [351b5e1b-5da4-442d-96b7-213e3e9a74aa] Running
	I0805 11:50:34.338638  402885 system_pods.go:89] "kube-apiserver-ha-672593" [6c6d5c3e-1d9e-4a8b-8a63-792a94e826a5] Running
	I0805 11:50:34.338646  402885 system_pods.go:89] "kube-apiserver-ha-672593-m02" [f40f5797-3916-467c-a42f-eb18f909121b] Running
	I0805 11:50:34.338650  402885 system_pods.go:89] "kube-apiserver-ha-672593-m03" [1e3694f4-9bc0-4e9a-8e1c-179bbb1c78ca] Running
	I0805 11:50:34.338657  402885 system_pods.go:89] "kube-controller-manager-ha-672593" [515f7a5c-1f0f-40e9-91ec-1921ec498f03] Running
	I0805 11:50:34.338662  402885 system_pods.go:89] "kube-controller-manager-ha-672593-m02" [60e41780-9ffd-49ea-b9ee-3bbf4dc3ad62] Running
	I0805 11:50:34.338668  402885 system_pods.go:89] "kube-controller-manager-ha-672593-m03" [c30415ed-5173-4283-9174-72d05ed227cc] Running
	I0805 11:50:34.338673  402885 system_pods.go:89] "kube-proxy-4q4tq" [44cceade-cf8b-4c4d-b06e-c83c3f20bd3a] Running
	I0805 11:50:34.338679  402885 system_pods.go:89] "kube-proxy-mdwh2" [93a2ab4f-2393-49f1-b185-97b90da38595] Running
	I0805 11:50:34.338683  402885 system_pods.go:89] "kube-proxy-wtsdt" [9a1664bb-e0a8-496e-a74d-3c25080dca8e] Running
	I0805 11:50:34.338689  402885 system_pods.go:89] "kube-scheduler-ha-672593" [5b680e35-89cc-4a77-a100-2feeccfa4b4b] Running
	I0805 11:50:34.338693  402885 system_pods.go:89] "kube-scheduler-ha-672593-m02" [beba4210-14b0-4bc3-a256-e61d47037355] Running
	I0805 11:50:34.338699  402885 system_pods.go:89] "kube-scheduler-ha-672593-m03" [9734cd6a-7e2a-4a7e-99e9-87b72c55a073] Running
	I0805 11:50:34.338703  402885 system_pods.go:89] "kube-vip-ha-672593" [36928548-a08e-49a4-a82a-6c6c3fb52b48] Running
	I0805 11:50:34.338707  402885 system_pods.go:89] "kube-vip-ha-672593-m02" [662dd07b-4ec6-471e-8209-6d25bac5459c] Running
	I0805 11:50:34.338711  402885 system_pods.go:89] "kube-vip-ha-672593-m03" [abc05dea-8108-4a5e-a223-1410c903fccc] Running
	I0805 11:50:34.338714  402885 system_pods.go:89] "storage-provisioner" [9c3a4e49-f517-40e4-bd83-1e69b6a7550c] Running
	I0805 11:50:34.338725  402885 system_pods.go:126] duration metric: took 209.252622ms to wait for k8s-apps to be running ...
	I0805 11:50:34.338734  402885 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 11:50:34.338784  402885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:50:34.353831  402885 system_svc.go:56] duration metric: took 15.083924ms WaitForService to wait for kubelet
	I0805 11:50:34.353862  402885 kubeadm.go:582] duration metric: took 24.676155589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:50:34.353884  402885 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:50:34.525311  402885 request.go:629] Waited for 171.329016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0805 11:50:34.525377  402885 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0805 11:50:34.525389  402885 round_trippers.go:469] Request Headers:
	I0805 11:50:34.525404  402885 round_trippers.go:473]     Accept: application/json, */*
	I0805 11:50:34.525412  402885 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 11:50:34.529093  402885 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 11:50:34.530327  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:50:34.530352  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:50:34.530368  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:50:34.530373  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:50:34.530380  402885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 11:50:34.530385  402885 node_conditions.go:123] node cpu capacity is 2
	I0805 11:50:34.530392  402885 node_conditions.go:105] duration metric: took 176.501612ms to run NodePressure ...
	I0805 11:50:34.530410  402885 start.go:241] waiting for startup goroutines ...
	I0805 11:50:34.530439  402885 start.go:255] writing updated cluster config ...
	I0805 11:50:34.530795  402885 ssh_runner.go:195] Run: rm -f paused
	I0805 11:50:34.585518  402885 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 11:50:34.587986  402885 out.go:177] * Done! kubectl is now configured to use "ha-672593" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.696209991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858938696187230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d10625c-9578-4fb8-904c-68220da69f9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.696705168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b05670a-d0e4-4b40-82eb-6036bc0f0e5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.696773569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b05670a-d0e4-4b40-82eb-6036bc0f0e5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.697161757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b05670a-d0e4-4b40-82eb-6036bc0f0e5f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.733937345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0057026-7de3-4caf-afa1-807f084ba825 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.734172982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0057026-7de3-4caf-afa1-807f084ba825 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.735289589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1ec5239-9f7c-421d-80ad-af4fb8efe2e5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.735715904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858938735697328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1ec5239-9f7c-421d-80ad-af4fb8efe2e5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.737372848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cebf03e3-f97f-4425-bb57-5d9d30690383 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.737426293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cebf03e3-f97f-4425-bb57-5d9d30690383 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.737643159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cebf03e3-f97f-4425-bb57-5d9d30690383 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.773228800Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3523aa6b-37af-4ae8-be7d-331aa7d33332 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.773299396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3523aa6b-37af-4ae8-be7d-331aa7d33332 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.774696401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53db5302-fb53-4040-8f06-446de1423e54 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.775185333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858938775164379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53db5302-fb53-4040-8f06-446de1423e54 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.775614498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c0c6878-60ec-4a11-844c-66417f9875c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.775663061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c0c6878-60ec-4a11-844c-66417f9875c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.775882020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c0c6878-60ec-4a11-844c-66417f9875c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.813750806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10f256a9-7178-4a43-a087-91da2f07d310 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.813820070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10f256a9-7178-4a43-a087-91da2f07d310 name=/runtime.v1.RuntimeService/Version
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.814674335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=135d6456-374f-4df7-84f3-249c8c849f71 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.815714002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722858938815687286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=135d6456-374f-4df7-84f3-249c8c849f71 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.817216317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4bb9ad5-5c60-445d-9ef7-e57e1037e895 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.817275529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4bb9ad5-5c60-445d-9ef7-e57e1037e895 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 11:55:38 ha-672593 crio[682]: time="2024-08-05 11:55:38.817566151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722858640390335720,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22,PodSandboxId:9d62d6071098f73247871066016f164f3ba1e01a8dea16d9e20b8de1b97aafd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722858498473117166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498489254054,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722858498409195705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e2
4f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722858486486455082,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172285848
1390523938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd,PodSandboxId:1d9da1cd788cad95304542b24cf401a422744353696579d0d29bd98eb8653eaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228584649
24084638,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99d4f33bf7a3af916699b26dbf5430d3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722858461888444722,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565,PodSandboxId:de38455447227b34bf7963342042ed5499630d6d5c6482c1c0aac94f9ce1a8d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722858461871321787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.ku
bernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722858461852093132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775,PodSandboxId:116d38bae0e1d9ea33ddac0f1847ec8bd262f0dfda40d19beb5ce58d9dfc120c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722858461788289725,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4bb9ad5-5c60-445d-9ef7-e57e1037e895 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f332a2eefb38a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   96a63340a808e       busybox-fc5497c4f-xx72g
	73fd9ef194837       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   162aab1f9af67       coredns-7db6d8ff4d-sgd4v
	e556c9ba49f5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   9d62d6071098f       storage-provisioner
	6354e702fe80a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   60a5e5f93bb15       coredns-7db6d8ff4d-sfh7c
	57cec2b511aa8       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   214360f7ff706       kindnet-7fndz
	11c4e00c9ba78       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   b824fdfadbf52       kube-proxy-wtsdt
	019abd676baf2       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   1d9da1cd788ca       kube-vip-ha-672593
	1019d9e100746       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   1c9e20b33b7b7       etcd-ha-672593
	50907082bdeb8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   de38455447227       kube-apiserver-ha-672593
	ca9839b56e3e6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   c7429b1a8552f       kube-scheduler-ha-672593
	b17d8131f0edc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   116d38bae0e1d       kube-controller-manager-ha-672593
	
	
	==> coredns [6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94] <==
	[INFO] 10.244.0.4:39677 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002029758s
	[INFO] 10.244.2.2:53990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169975s
	[INFO] 10.244.2.2:39764 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234053s
	[INFO] 10.244.2.2:43842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223142s
	[INFO] 10.244.1.2:42884 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137522s
	[INFO] 10.244.1.2:35448 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147398s
	[INFO] 10.244.1.2:52034 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147397s
	[INFO] 10.244.0.4:50553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110672s
	[INFO] 10.244.0.4:47698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069619s
	[INFO] 10.244.0.4:39504 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139191s
	[INFO] 10.244.0.4:35787 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065087s
	[INFO] 10.244.2.2:57478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118877s
	[INFO] 10.244.2.2:44657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121159s
	[INFO] 10.244.2.2:33599 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126768s
	[INFO] 10.244.1.2:54159 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179418s
	[INFO] 10.244.1.2:49562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092072s
	[INFO] 10.244.0.4:42290 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077914s
	[INFO] 10.244.2.2:59634 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164343s
	[INFO] 10.244.2.2:43784 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159677s
	[INFO] 10.244.1.2:49443 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173465s
	[INFO] 10.244.1.2:58280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015744s
	[INFO] 10.244.0.4:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111584s
	[INFO] 10.244.0.4:42223 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078636s
	[INFO] 10.244.0.4:42616 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084454s
	[INFO] 10.244.0.4:49723 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087038s
	
	
	==> coredns [73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1] <==
	[INFO] 10.244.0.4:55666 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00061709s
	[INFO] 10.244.2.2:58579 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003537694s
	[INFO] 10.244.2.2:57289 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170189s
	[INFO] 10.244.2.2:42256 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011807351s
	[INFO] 10.244.2.2:32771 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000276871s
	[INFO] 10.244.2.2:34794 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013565s
	[INFO] 10.244.1.2:33425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168612s
	[INFO] 10.244.1.2:49339 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001876895s
	[INFO] 10.244.1.2:41345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001388007s
	[INFO] 10.244.1.2:39680 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097906s
	[INFO] 10.244.1.2:38660 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162674s
	[INFO] 10.244.0.4:37518 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828264s
	[INFO] 10.244.0.4:43389 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136081s
	[INFO] 10.244.0.4:58226 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071105s
	[INFO] 10.244.0.4:43658 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098104s
	[INFO] 10.244.2.2:40561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109999s
	[INFO] 10.244.1.2:41071 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120854s
	[INFO] 10.244.1.2:40710 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080783s
	[INFO] 10.244.0.4:54672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011185s
	[INFO] 10.244.0.4:55288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117161s
	[INFO] 10.244.0.4:41744 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068123s
	[INFO] 10.244.2.2:60620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013916s
	[INFO] 10.244.2.2:52672 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153187s
	[INFO] 10.244.1.2:36870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144481s
	[INFO] 10.244.1.2:43017 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166959s
	
	
	==> describe nodes <==
	Name:               ha-672593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T11_47_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:47:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:50:51 +0000   Mon, 05 Aug 2024 11:48:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-672593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb8829a6b1d145d6aee2ea0e80194fe4
	  System UUID:                fb8829a6-b1d1-45d6-aee2-ea0e80194fe4
	  Boot ID:                    ecb22512-bcb2-43ab-b502-fc0c346e754f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xx72g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 coredns-7db6d8ff4d-sfh7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m38s
	  kube-system                 coredns-7db6d8ff4d-sgd4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m38s
	  kube-system                 etcd-ha-672593                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m51s
	  kube-system                 kindnet-7fndz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m39s
	  kube-system                 kube-apiserver-ha-672593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-controller-manager-ha-672593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-proxy-wtsdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-scheduler-ha-672593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-vip-ha-672593                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m37s  kube-proxy       
	  Normal  Starting                 7m52s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m51s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m51s  kubelet          Node ha-672593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s  kubelet          Node ha-672593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m51s  kubelet          Node ha-672593 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m40s  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal  NodeReady                7m22s  kubelet          Node ha-672593 status is now: NodeReady
	  Normal  RegisteredNode           6m29s  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal  RegisteredNode           5m16s  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	
	
	Name:               ha-672593-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:48:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:52:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 11:50:55 +0000   Mon, 05 Aug 2024 11:52:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-672593-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8aa3c6ca9e9a439e91c6c120c9ce9ce7
	  System UUID:                8aa3c6ca-9e9a-439e-91c6-c120c9ce9ce7
	  Boot ID:                    38ffe74c-4439-4306-9791-6e268f90d149
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vn64j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 etcd-ha-672593-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m45s
	  kube-system                 kindnet-85fm7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m47s
	  kube-system                 kube-apiserver-ha-672593-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-controller-manager-ha-672593-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-proxy-mdwh2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-scheduler-ha-672593-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-vip-ha-672593-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m47s (x8 over 6m47s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s (x8 over 6m47s)  kubelet          Node ha-672593-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m47s (x7 over 6m47s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m45s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  NodeNotReady             2m41s                  node-controller  Node ha-672593-m02 status is now: NodeNotReady
	
	
	Name:               ha-672593-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_50_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:55:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:51:07 +0000   Mon, 05 Aug 2024 11:50:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-672593-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 95bc9a27650e44d8882cc62883736cdc
	  System UUID:                95bc9a27-650e-44d8-882c-c62883736cdc
	  Boot ID:                    ab03c867-4435-497f-a3f3-21dc7ccd0744
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dq7jg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 etcd-ha-672593-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-wnbr8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-672593-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-672593-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-4q4tq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-672593-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-vip-ha-672593-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-672593-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-672593-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-672593-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	
	
	Name:               ha-672593-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_51_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:51:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:55:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:51:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:51:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:51:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:52:02 +0000   Mon, 05 Aug 2024 11:52:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-672593-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5561d3ea391496e983c8078f06ff6c0
	  System UUID:                f5561d3e-a391-496e-983c-8078f06ff6c0
	  Boot ID:                    8c3ba653-2f5c-4f6b-97cc-3874b6ca2e6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6dfc5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-proxy-lpp7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m25s (x2 over 4m25s)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x2 over 4m25s)  kubelet          Node ha-672593-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x2 over 4m25s)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal  NodeReady                3m37s                  kubelet          Node ha-672593-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 5 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050074] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039249] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.753225] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.440209] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588179] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.056383] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.055926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054067] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.198790] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.118095] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.300760] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.230253] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.263666] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.055709] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.688004] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.472104] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[Aug 5 11:48] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.212364] kauditd_printk_skb: 29 callbacks suppressed
	[ +52.825644] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd] <==
	{"level":"warn","ts":"2024-08-05T11:55:38.784903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:38.824798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:38.852411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:38.885502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.086801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.09116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.109859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.117155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.123131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.126592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.129788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.138052Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.143925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.150381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.153446Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.156429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.163889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.169729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.175332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.178806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.181462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.18615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.188214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.1953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T11:55:39.201276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:55:39 up 8 min,  0 users,  load average: 0.31, 0.30, 0.17
	Linux ha-672593 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46] <==
	I0805 11:55:07.435129       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:55:17.442223       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:55:17.442339       1 main.go:299] handling current node
	I0805 11:55:17.442375       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:55:17.442395       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:55:17.442561       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:55:17.442588       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:55:17.442680       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:55:17.442700       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:55:27.436786       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:55:27.437055       1 main.go:299] handling current node
	I0805 11:55:27.437120       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:55:27.437131       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:55:27.437561       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:55:27.437635       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:55:27.437790       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:55:27.437813       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:55:37.439447       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:55:37.439546       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:55:37.439710       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:55:37.439735       1 main.go:299] handling current node
	I0805 11:55:37.439757       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:55:37.439772       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:55:37.439835       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:55:37.439855       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565] <==
	E0805 11:50:43.789269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34086: use of closed network connection
	E0805 11:50:43.976322       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34114: use of closed network connection
	E0805 11:51:14.956507       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0805 11:51:14.957049       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0805 11:51:14.956693       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.074µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0805 11:51:14.958274       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0805 11:51:14.958403       1 timeout.go:142] post-timeout activity - time-elapsed: 1.974459ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0805 11:51:15.355884       1 trace.go:236] Trace[2057187306]: "Delete" accept:application/vnd.kubernetes.protobuf, */*,audit-id:d69f909f-0d36-4268-b370-405a73ba5a2d,client:192.168.39.102,api-group:,api-version:v1,name:kindnet-b7k4j,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-b7k4j,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:daemon-set-controller,verb:DELETE (05-Aug-2024 11:51:14.488) (total time: 867ms):
	Trace[2057187306]: ["GuaranteedUpdate etcd3" audit-id:d69f909f-0d36-4268-b370-405a73ba5a2d,key:/pods/kube-system/kindnet-b7k4j,type:*core.Pod,resource:pods 830ms (11:51:14.525)
	Trace[2057187306]:  ---"Txn call completed" 830ms (11:51:15.355)]
	Trace[2057187306]: [867.121964ms] [867.121964ms] END
	I0805 11:51:15.356091       1 trace.go:236] Trace[1882335687]: "Patch" accept:application/json, */*,audit-id:6a7ce633-7c2c-4b1f-ab10-58dd4b60ca48,client:192.168.39.4,api-group:,api-version:v1,name:ha-672593-m04,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-672593-m04,user-agent:kubeadm/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PATCH (05-Aug-2024 11:51:14.525) (total time: 830ms):
	Trace[1882335687]: ["GuaranteedUpdate etcd3" audit-id:6a7ce633-7c2c-4b1f-ab10-58dd4b60ca48,key:/minions/ha-672593-m04,type:*core.Node,resource:nodes 830ms (11:51:14.525)
	Trace[1882335687]:  ---"Txn call completed" 827ms (11:51:15.355)]
	Trace[1882335687]: ---"Object stored in database" 828ms (11:51:15.355)
	Trace[1882335687]: [830.979361ms] [830.979361ms] END
	I0805 11:51:15.373191       1 trace.go:236] Trace[1292202851]: "Delete" accept:application/vnd.kubernetes.protobuf, */*,audit-id:e8bdf6a1-f936-44aa-a3e3-f2dcf8ca002f,client:192.168.39.102,api-group:,api-version:v1,name:kindnet-fgzhp,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-fgzhp,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:daemon-set-controller,verb:DELETE (05-Aug-2024 11:51:14.488) (total time: 885ms):
	Trace[1292202851]: ["GuaranteedUpdate etcd3" audit-id:e8bdf6a1-f936-44aa-a3e3-f2dcf8ca002f,key:/pods/kube-system/kindnet-fgzhp,type:*core.Pod,resource:pods 851ms (11:51:14.521)
	Trace[1292202851]:  ---"Txn call completed" 244ms (11:51:14.765)
	Trace[1292202851]:  ---"Txn call completed" 606ms (11:51:15.372)]
	Trace[1292202851]: [885.062494ms] [885.062494ms] END
	I0805 11:51:15.374273       1 trace.go:236] Trace[1503873603]: "Delete" accept:application/vnd.kubernetes.protobuf, */*,audit-id:1f8a93b4-7913-4c1b-b500-1e3c1a25153d,client:192.168.39.102,api-group:,api-version:v1,name:kube-proxy-rzj75,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-rzj75,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:daemon-set-controller,verb:DELETE (05-Aug-2024 11:51:14.446) (total time: 928ms):
	Trace[1503873603]: ---"Object deleted from database" 854ms (11:51:15.374)
	Trace[1503873603]: [928.175411ms] [928.175411ms] END
	W0805 11:52:36.535304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.210]
	
	
	==> kube-controller-manager [b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775] <==
	I0805 11:50:06.241131       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-672593-m03" podCIDRs=["10.244.2.0/24"]
	I0805 11:50:10.019449       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-672593-m03"
	I0805 11:50:35.511614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.312251ms"
	I0805 11:50:35.543228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.545622ms"
	I0805 11:50:35.718444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="175.00124ms"
	I0805 11:50:35.886337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="167.824683ms"
	E0805 11:50:35.886402       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0805 11:50:35.886582       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.447µs"
	I0805 11:50:35.892381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.907µs"
	I0805 11:50:36.181404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.778µs"
	I0805 11:50:38.032301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.33µs"
	I0805 11:50:39.301373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.703269ms"
	I0805 11:50:39.302794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.516µs"
	I0805 11:50:39.398444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.845419ms"
	I0805 11:50:39.398657       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.043µs"
	I0805 11:50:40.772924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.173027ms"
	I0805 11:50:40.773129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.867µs"
	E0805 11:51:13.986714       1 certificate_controller.go:146] Sync csr-l8sbz failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-l8sbz": the object has been modified; please apply your changes to the latest version and try again
	I0805 11:51:14.287098       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-672593-m04\" does not exist"
	I0805 11:51:14.324037       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-672593-m04" podCIDRs=["10.244.3.0/24"]
	I0805 11:51:15.030692       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-672593-m04"
	I0805 11:52:02.782810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-672593-m04"
	I0805 11:52:58.822861       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-672593-m04"
	I0805 11:52:58.914362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.130266ms"
	I0805 11:52:58.914624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.357µs"
	
	
	==> kube-proxy [11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48] <==
	I0805 11:48:01.596014       1 server_linux.go:69] "Using iptables proxy"
	I0805 11:48:01.611292       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0805 11:48:01.688564       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 11:48:01.688703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 11:48:01.688807       1 server_linux.go:165] "Using iptables Proxier"
	I0805 11:48:01.692611       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 11:48:01.693822       1 server.go:872] "Version info" version="v1.30.3"
	I0805 11:48:01.693932       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:48:01.695740       1 config.go:192] "Starting service config controller"
	I0805 11:48:01.696023       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 11:48:01.696086       1 config.go:101] "Starting endpoint slice config controller"
	I0805 11:48:01.696106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 11:48:01.697101       1 config.go:319] "Starting node config controller"
	I0805 11:48:01.697140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 11:48:01.796613       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 11:48:01.796745       1 shared_informer.go:320] Caches are synced for service config
	I0805 11:48:01.797314       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334] <==
	E0805 11:47:45.748460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 11:47:45.750500       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 11:47:45.750619       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 11:47:45.752782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 11:47:45.752912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 11:47:45.760425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 11:47:45.760473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 11:47:45.823044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 11:47:45.823236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 11:47:45.870442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:47:45.870650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:47:45.877420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 11:47:45.877634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 11:47:46.070799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 11:47:46.070918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 11:47:46.073847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 11:47:46.074127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 11:47:46.087219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 11:47:46.087263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 11:47:46.159418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 11:47:46.159494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0805 11:47:48.317360       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 11:50:06.340620       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6lh4q\": pod kindnet-6lh4q is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-6lh4q" node="ha-672593-m03"
	E0805 11:50:06.341001       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6lh4q\": pod kindnet-6lh4q is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-6lh4q"
	E0805 11:50:06.359326       1 schedule_one.go:1095] "Error updating pod" err="pods \"kindnet-6lh4q\" not found" pod="kube-system/kindnet-6lh4q"
	
	
	==> kubelet <==
	Aug 05 11:50:48 ha-672593 kubelet[1363]: E0805 11:50:48.028465    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:50:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:50:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:50:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:50:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:51:48 ha-672593 kubelet[1363]: E0805 11:51:48.029347    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:51:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:51:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:51:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:51:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:52:48 ha-672593 kubelet[1363]: E0805 11:52:48.034253    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:52:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:52:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:52:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:52:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:53:48 ha-672593 kubelet[1363]: E0805 11:53:48.030232    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:53:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:53:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:53:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:53:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 11:54:48 ha-672593 kubelet[1363]: E0805 11:54:48.033434    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 11:54:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 11:54:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 11:54:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 11:54:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-672593 -n ha-672593
helpers_test.go:261: (dbg) Run:  kubectl --context ha-672593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (51.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-672593 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-672593 -v=7 --alsologtostderr
E0805 11:56:50.804895  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-672593 -v=7 --alsologtostderr: exit status 82 (2m1.918846419s)

                                                
                                                
-- stdout --
	* Stopping node "ha-672593-m04"  ...
	* Stopping node "ha-672593-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:55:40.662480  408761 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:55:40.662600  408761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:40.662612  408761 out.go:304] Setting ErrFile to fd 2...
	I0805 11:55:40.662618  408761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:40.663353  408761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:55:40.663854  408761 out.go:298] Setting JSON to false
	I0805 11:55:40.664021  408761 mustload.go:65] Loading cluster: ha-672593
	I0805 11:55:40.664418  408761 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:55:40.664508  408761 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:55:40.664703  408761 mustload.go:65] Loading cluster: ha-672593
	I0805 11:55:40.664841  408761 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:55:40.664873  408761 stop.go:39] StopHost: ha-672593-m04
	I0805 11:55:40.665237  408761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:40.665302  408761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:40.680351  408761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0805 11:55:40.680897  408761 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:40.681532  408761 main.go:141] libmachine: Using API Version  1
	I0805 11:55:40.681553  408761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:40.681913  408761 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:40.684386  408761 out.go:177] * Stopping node "ha-672593-m04"  ...
	I0805 11:55:40.685927  408761 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 11:55:40.685982  408761 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 11:55:40.686262  408761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 11:55:40.686291  408761 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 11:55:40.689024  408761 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:40.689504  408761 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:50:59 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 11:55:40.689530  408761 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 11:55:40.689712  408761 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 11:55:40.689871  408761 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 11:55:40.690079  408761 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 11:55:40.690235  408761 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 11:55:40.769999  408761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 11:55:40.823498  408761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 11:55:40.877524  408761 main.go:141] libmachine: Stopping "ha-672593-m04"...
	I0805 11:55:40.877561  408761 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:55:40.879270  408761 main.go:141] libmachine: (ha-672593-m04) Calling .Stop
	I0805 11:55:40.882956  408761 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 0/120
	I0805 11:55:42.114711  408761 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 11:55:42.115933  408761 main.go:141] libmachine: Machine "ha-672593-m04" was stopped.
	I0805 11:55:42.115953  408761 stop.go:75] duration metric: took 1.430031543s to stop
	I0805 11:55:42.115994  408761 stop.go:39] StopHost: ha-672593-m03
	I0805 11:55:42.116301  408761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:55:42.116353  408761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:55:42.132937  408761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0805 11:55:42.133385  408761 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:55:42.133860  408761 main.go:141] libmachine: Using API Version  1
	I0805 11:55:42.133886  408761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:55:42.134234  408761 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:55:42.136293  408761 out.go:177] * Stopping node "ha-672593-m03"  ...
	I0805 11:55:42.137592  408761 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 11:55:42.137617  408761 main.go:141] libmachine: (ha-672593-m03) Calling .DriverName
	I0805 11:55:42.137883  408761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 11:55:42.137916  408761 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHHostname
	I0805 11:55:42.140666  408761 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:42.141093  408761 main.go:141] libmachine: (ha-672593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:2e:1f", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:49:33 +0000 UTC Type:0 Mac:52:54:00:3d:2e:1f Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-672593-m03 Clientid:01:52:54:00:3d:2e:1f}
	I0805 11:55:42.141130  408761 main.go:141] libmachine: (ha-672593-m03) DBG | domain ha-672593-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:3d:2e:1f in network mk-ha-672593
	I0805 11:55:42.141232  408761 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHPort
	I0805 11:55:42.141394  408761 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHKeyPath
	I0805 11:55:42.141564  408761 main.go:141] libmachine: (ha-672593-m03) Calling .GetSSHUsername
	I0805 11:55:42.141719  408761 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m03/id_rsa Username:docker}
	I0805 11:55:42.231762  408761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 11:55:42.285021  408761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 11:55:42.339847  408761 main.go:141] libmachine: Stopping "ha-672593-m03"...
	I0805 11:55:42.339882  408761 main.go:141] libmachine: (ha-672593-m03) Calling .GetState
	I0805 11:55:42.341447  408761 main.go:141] libmachine: (ha-672593-m03) Calling .Stop
	I0805 11:55:42.344822  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 0/120
	I0805 11:55:43.346352  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 1/120
	I0805 11:55:44.347774  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 2/120
	I0805 11:55:45.349061  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 3/120
	I0805 11:55:46.350715  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 4/120
	I0805 11:55:47.352758  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 5/120
	I0805 11:55:48.354376  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 6/120
	I0805 11:55:49.356077  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 7/120
	I0805 11:55:50.357309  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 8/120
	I0805 11:55:51.358949  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 9/120
	I0805 11:55:52.361126  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 10/120
	I0805 11:55:53.362887  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 11/120
	I0805 11:55:54.364431  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 12/120
	I0805 11:55:55.365744  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 13/120
	I0805 11:55:56.367280  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 14/120
	I0805 11:55:57.368858  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 15/120
	I0805 11:55:58.370147  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 16/120
	I0805 11:55:59.371919  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 17/120
	I0805 11:56:00.373445  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 18/120
	I0805 11:56:01.374941  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 19/120
	I0805 11:56:02.376711  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 20/120
	I0805 11:56:03.378233  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 21/120
	I0805 11:56:04.379946  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 22/120
	I0805 11:56:05.381540  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 23/120
	I0805 11:56:06.382913  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 24/120
	I0805 11:56:07.384545  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 25/120
	I0805 11:56:08.386042  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 26/120
	I0805 11:56:09.388238  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 27/120
	I0805 11:56:10.389719  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 28/120
	I0805 11:56:11.391147  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 29/120
	I0805 11:56:12.392951  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 30/120
	I0805 11:56:13.394512  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 31/120
	I0805 11:56:14.395963  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 32/120
	I0805 11:56:15.397608  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 33/120
	I0805 11:56:16.398893  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 34/120
	I0805 11:56:17.400682  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 35/120
	I0805 11:56:18.402023  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 36/120
	I0805 11:56:19.403562  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 37/120
	I0805 11:56:20.404904  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 38/120
	I0805 11:56:21.406351  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 39/120
	I0805 11:56:22.408209  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 40/120
	I0805 11:56:23.409647  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 41/120
	I0805 11:56:24.411316  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 42/120
	I0805 11:56:25.412859  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 43/120
	I0805 11:56:26.414225  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 44/120
	I0805 11:56:27.415860  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 45/120
	I0805 11:56:28.417372  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 46/120
	I0805 11:56:29.418876  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 47/120
	I0805 11:56:30.420386  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 48/120
	I0805 11:56:31.421997  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 49/120
	I0805 11:56:32.423731  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 50/120
	I0805 11:56:33.425169  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 51/120
	I0805 11:56:34.426418  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 52/120
	I0805 11:56:35.428127  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 53/120
	I0805 11:56:36.429512  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 54/120
	I0805 11:56:37.430935  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 55/120
	I0805 11:56:38.432629  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 56/120
	I0805 11:56:39.433828  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 57/120
	I0805 11:56:40.435433  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 58/120
	I0805 11:56:41.436650  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 59/120
	I0805 11:56:42.438190  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 60/120
	I0805 11:56:43.439697  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 61/120
	I0805 11:56:44.441013  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 62/120
	I0805 11:56:45.442456  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 63/120
	I0805 11:56:46.443637  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 64/120
	I0805 11:56:47.445135  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 65/120
	I0805 11:56:48.446507  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 66/120
	I0805 11:56:49.447814  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 67/120
	I0805 11:56:50.449227  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 68/120
	I0805 11:56:51.450562  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 69/120
	I0805 11:56:52.452437  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 70/120
	I0805 11:56:53.453920  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 71/120
	I0805 11:56:54.455231  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 72/120
	I0805 11:56:55.456644  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 73/120
	I0805 11:56:56.458010  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 74/120
	I0805 11:56:57.459771  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 75/120
	I0805 11:56:58.461352  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 76/120
	I0805 11:56:59.462913  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 77/120
	I0805 11:57:00.464216  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 78/120
	I0805 11:57:01.465741  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 79/120
	I0805 11:57:02.467462  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 80/120
	I0805 11:57:03.468877  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 81/120
	I0805 11:57:04.470418  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 82/120
	I0805 11:57:05.471885  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 83/120
	I0805 11:57:06.473233  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 84/120
	I0805 11:57:07.475127  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 85/120
	I0805 11:57:08.476715  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 86/120
	I0805 11:57:09.478401  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 87/120
	I0805 11:57:10.479932  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 88/120
	I0805 11:57:11.481286  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 89/120
	I0805 11:57:12.483127  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 90/120
	I0805 11:57:13.484605  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 91/120
	I0805 11:57:14.485934  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 92/120
	I0805 11:57:15.487310  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 93/120
	I0805 11:57:16.488591  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 94/120
	I0805 11:57:17.490163  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 95/120
	I0805 11:57:18.491631  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 96/120
	I0805 11:57:19.493108  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 97/120
	I0805 11:57:20.494403  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 98/120
	I0805 11:57:21.495995  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 99/120
	I0805 11:57:22.497654  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 100/120
	I0805 11:57:23.499088  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 101/120
	I0805 11:57:24.500609  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 102/120
	I0805 11:57:25.502259  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 103/120
	I0805 11:57:26.504385  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 104/120
	I0805 11:57:27.506667  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 105/120
	I0805 11:57:28.508342  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 106/120
	I0805 11:57:29.510047  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 107/120
	I0805 11:57:30.511500  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 108/120
	I0805 11:57:31.513122  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 109/120
	I0805 11:57:32.514984  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 110/120
	I0805 11:57:33.516824  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 111/120
	I0805 11:57:34.518489  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 112/120
	I0805 11:57:35.520026  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 113/120
	I0805 11:57:36.521525  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 114/120
	I0805 11:57:37.523088  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 115/120
	I0805 11:57:38.524624  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 116/120
	I0805 11:57:39.525964  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 117/120
	I0805 11:57:40.527454  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 118/120
	I0805 11:57:41.528936  408761 main.go:141] libmachine: (ha-672593-m03) Waiting for machine to stop 119/120
	I0805 11:57:42.529875  408761 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 11:57:42.529954  408761 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0805 11:57:42.531809  408761 out.go:177] 
	W0805 11:57:42.533143  408761 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0805 11:57:42.533155  408761 out.go:239] * 
	* 
	W0805 11:57:42.536289  408761 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 11:57:42.537570  408761 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-672593 -v=7 --alsologtostderr" : exit status 82
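	(Editor's note, not part of the captured output: the stderr above shows the stop command polling the m03 VM state once per second for exactly 120 attempts ("Waiting for machine to stop 0/120" ... "119/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. The following is a minimal, self-contained Go sketch of that kind of bounded stop-wait loop, written only to illustrate the shape of the failure; it is not minikube's actual implementation, and `getState`/`stopVM` are hypothetical stand-ins for the driver's `.GetState`/`.Stop` calls seen in the log.)

	// stopwait_sketch.go - illustrative only; simulates a VM that never
	// leaves the "Running" state, reproducing the 120-attempt timeout
	// pattern visible in the captured stderr.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Hypothetical stand-ins for the libmachine driver calls.
	func getState() string { return "Running" } // VM never reports "Stopped"
	func stopVM() error    { return nil }       // stop request is issued but has no effect

	// waitForStop issues a stop request, then polls the state once per second
	// for at most `attempts` iterations before returning a timeout-style error.
	func waitForStop(name string, attempts int) error {
		if err := stopVM(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
			if getState() == "Stopped" {
				return nil
			}
			time.Sleep(time.Second) // ~1s between polls, matching the log timestamps
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop("ha-672593-m03", 120); err != nil {
			// Mirrors the "stop err" line that precedes GUEST_STOP_TIMEOUT above.
			fmt.Println("stop err:", err)
		}
	}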
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-672593 --wait=true -v=7 --alsologtostderr
E0805 11:57:52.927184  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 12:00:27.753807  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-672593 --wait=true -v=7 --alsologtostderr: (4m7.470578889s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-672593
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-672593 -n ha-672593
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-672593 logs -n 25: (1.94433082s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m02:/home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m04 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp testdata/cp-test.txt                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593:/home/docker/cp-test_ha-672593-m04_ha-672593.txt                       |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593 sudo cat                                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593.txt                                 |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m02:/home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03:/home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m03 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-672593 node stop m02 -v=7                                                     | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-672593 node start m02 -v=7                                                    | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-672593 -v=7                                                           | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-672593 -v=7                                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-672593 --wait=true -v=7                                                    | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:57 UTC | 05 Aug 24 12:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-672593                                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 12:01 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:57:42
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
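	As a reading aid only, the prefix layout described above can be split apart with a small Go sketch; the regexp and field names below are illustrative assumptions by the editor, not minikube code and not part of the captured log.

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogPrefix matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// layout stated in the log header above.
	var glogPrefix = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+:\d+)\] (.*)$`)

	func main() {
		// Sample line taken verbatim from the log below.
		sample := "I0805 11:57:42.587028  409225 out.go:291] Setting OutFile to fd 1 ..."
		if m := glogPrefix.FindStringSubmatch(sample); m != nil {
			fmt.Printf("level=%s date=%s time=%s pid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}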
	I0805 11:57:42.587028  409225 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:57:42.587187  409225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:57:42.587198  409225 out.go:304] Setting ErrFile to fd 2...
	I0805 11:57:42.587202  409225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:57:42.587365  409225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:57:42.588019  409225 out.go:298] Setting JSON to false
	I0805 11:57:42.588988  409225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6010,"bootTime":1722853053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:57:42.589064  409225 start.go:139] virtualization: kvm guest
	I0805 11:57:42.591367  409225 out.go:177] * [ha-672593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:57:42.592840  409225 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:57:42.592881  409225 notify.go:220] Checking for updates...
	I0805 11:57:42.595289  409225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:57:42.596561  409225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:57:42.597698  409225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:57:42.598774  409225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:57:42.599903  409225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:57:42.601461  409225 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:57:42.601566  409225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:57:42.601971  409225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:57:42.602059  409225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:57:42.617236  409225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I0805 11:57:42.617734  409225 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:57:42.618353  409225 main.go:141] libmachine: Using API Version  1
	I0805 11:57:42.618385  409225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:57:42.618708  409225 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:57:42.618889  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:57:42.653788  409225 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 11:57:42.654954  409225 start.go:297] selected driver: kvm2
	I0805 11:57:42.654967  409225 start.go:901] validating driver "kvm2" against &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:57:42.655104  409225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:57:42.655421  409225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:57:42.655486  409225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:57:42.670195  409225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:57:42.671104  409225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:57:42.671204  409225 cni.go:84] Creating CNI manager for ""
	I0805 11:57:42.671223  409225 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 11:57:42.671324  409225 start.go:340] cluster config:
	{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:57:42.671545  409225 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:57:42.673317  409225 out.go:177] * Starting "ha-672593" primary control-plane node in "ha-672593" cluster
	I0805 11:57:42.674345  409225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:57:42.674376  409225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 11:57:42.674387  409225 cache.go:56] Caching tarball of preloaded images
	I0805 11:57:42.674468  409225 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:57:42.674478  409225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:57:42.674589  409225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:57:42.674765  409225 start.go:360] acquireMachinesLock for ha-672593: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:57:42.674801  409225 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-672593"
	I0805 11:57:42.674814  409225 start.go:96] Skipping create...Using existing machine configuration
	I0805 11:57:42.674822  409225 fix.go:54] fixHost starting: 
	I0805 11:57:42.675069  409225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:57:42.675109  409225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:57:42.689143  409225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40879
	I0805 11:57:42.689687  409225 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:57:42.690346  409225 main.go:141] libmachine: Using API Version  1
	I0805 11:57:42.690379  409225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:57:42.690694  409225 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:57:42.690897  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:57:42.691063  409225 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:57:42.692602  409225 fix.go:112] recreateIfNeeded on ha-672593: state=Running err=<nil>
	W0805 11:57:42.692625  409225 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 11:57:42.694522  409225 out.go:177] * Updating the running kvm2 "ha-672593" VM ...
	I0805 11:57:42.695700  409225 machine.go:94] provisionDockerMachine start ...
	I0805 11:57:42.695717  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:57:42.695918  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:42.698252  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.698651  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:42.698680  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.698804  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:42.698968  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.699125  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.699226  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:42.699367  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:42.699583  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:42.699596  409225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 11:57:42.820713  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593
	
	I0805 11:57:42.820762  409225 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:57:42.821057  409225 buildroot.go:166] provisioning hostname "ha-672593"
	I0805 11:57:42.821091  409225 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:57:42.821297  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:42.823868  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.824262  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:42.824288  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.824396  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:42.824541  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.824692  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.824816  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:42.824939  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:42.825109  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:42.825125  409225 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593 && echo "ha-672593" | sudo tee /etc/hostname
	I0805 11:57:42.960025  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593
	
	I0805 11:57:42.960054  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:42.962798  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.963138  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:42.963169  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.963350  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:42.963536  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.963671  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.963844  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:42.964002  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:42.964190  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:42.964216  409225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:57:43.076996  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:57:43.077027  409225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:57:43.077047  409225 buildroot.go:174] setting up certificates
	I0805 11:57:43.077055  409225 provision.go:84] configureAuth start
	I0805 11:57:43.077064  409225 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:57:43.077344  409225 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:57:43.079951  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.080247  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.080272  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.080382  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:43.082607  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.082951  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.082975  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.083106  409225 provision.go:143] copyHostCerts
	I0805 11:57:43.083158  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:57:43.083191  409225 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:57:43.083200  409225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:57:43.083269  409225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:57:43.083355  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:57:43.083375  409225 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:57:43.083379  409225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:57:43.083402  409225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:57:43.083459  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:57:43.083474  409225 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:57:43.083478  409225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:57:43.083498  409225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:57:43.083557  409225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593 san=[127.0.0.1 192.168.39.102 ha-672593 localhost minikube]
	I0805 11:57:43.185719  409225 provision.go:177] copyRemoteCerts
	I0805 11:57:43.185783  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:57:43.185811  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:43.188495  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.188838  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.188866  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.189030  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:43.189224  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:43.189392  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:43.189530  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:57:43.275306  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:57:43.275386  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:57:43.301685  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:57:43.301752  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0805 11:57:43.325360  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:57:43.325408  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 11:57:43.358227  409225 provision.go:87] duration metric: took 281.156227ms to configureAuth
	I0805 11:57:43.358257  409225 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:57:43.358567  409225 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:57:43.358672  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:43.361266  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.361697  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.361734  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.361940  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:43.362145  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:43.362318  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:43.362420  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:43.362583  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:43.362803  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:43.362843  409225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:59:14.130177  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:59:14.130212  409225 machine.go:97] duration metric: took 1m31.434498748s to provisionDockerMachine
	I0805 11:59:14.130232  409225 start.go:293] postStartSetup for "ha-672593" (driver="kvm2")
	I0805 11:59:14.130258  409225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:59:14.130278  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.130639  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:59:14.130679  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.134195  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.134682  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.134712  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.134864  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.135035  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.135158  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.135357  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:59:14.224728  409225 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:59:14.229288  409225 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:59:14.229317  409225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:59:14.229394  409225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:59:14.229472  409225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:59:14.229482  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:59:14.229582  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:59:14.239141  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:59:14.265593  409225 start.go:296] duration metric: took 135.34256ms for postStartSetup
	I0805 11:59:14.265640  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.265977  409225 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 11:59:14.266013  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.268773  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.269216  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.269248  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.269424  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.269626  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.269755  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.269896  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	W0805 11:59:14.358779  409225 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0805 11:59:14.358807  409225 fix.go:56] duration metric: took 1m31.683985789s for fixHost
	I0805 11:59:14.358834  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.361640  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.361963  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.361993  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.362145  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.362366  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.362580  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.362743  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.362898  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:59:14.363079  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:59:14.363089  409225 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 11:59:14.476567  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722859154.442870085
	
	I0805 11:59:14.476588  409225 fix.go:216] guest clock: 1722859154.442870085
	I0805 11:59:14.476598  409225 fix.go:229] Guest: 2024-08-05 11:59:14.442870085 +0000 UTC Remote: 2024-08-05 11:59:14.358818403 +0000 UTC m=+91.810048921 (delta=84.051682ms)
	I0805 11:59:14.476625  409225 fix.go:200] guest clock delta is within tolerance: 84.051682ms
	I0805 11:59:14.476641  409225 start.go:83] releasing machines lock for "ha-672593", held for 1m31.801822706s
	I0805 11:59:14.476663  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.476981  409225 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:59:14.479780  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.480170  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.480251  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.480373  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.481022  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.481309  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.481438  409225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:59:14.481506  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.481582  409225 ssh_runner.go:195] Run: cat /version.json
	I0805 11:59:14.481609  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.484392  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.484552  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.484826  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.484853  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.484952  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.484975  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.485008  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.485205  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.485221  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.485403  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.485414  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.485578  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.485643  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:59:14.485695  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:59:14.587349  409225 ssh_runner.go:195] Run: systemctl --version
	I0805 11:59:14.593964  409225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:59:14.758447  409225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:59:14.765274  409225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:59:14.765354  409225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:59:14.775199  409225 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 11:59:14.775222  409225 start.go:495] detecting cgroup driver to use...
	I0805 11:59:14.775300  409225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:59:14.791786  409225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:59:14.805731  409225 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:59:14.805785  409225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:59:14.821619  409225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:59:14.835613  409225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:59:14.981307  409225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:59:15.130708  409225 docker.go:233] disabling docker service ...
	I0805 11:59:15.130769  409225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:59:15.148232  409225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:59:15.162795  409225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:59:15.306533  409225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:59:15.465286  409225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:59:15.481194  409225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:59:15.501248  409225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:59:15.501336  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.512467  409225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:59:15.512571  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.523614  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.546097  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.569776  409225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:59:15.583644  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.600072  409225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.625275  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.651666  409225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:59:15.681002  409225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:59:15.696579  409225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:59:15.873112  409225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:59:16.178598  409225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:59:16.178685  409225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:59:16.184758  409225 start.go:563] Will wait 60s for crictl version
	I0805 11:59:16.184863  409225 ssh_runner.go:195] Run: which crictl
	I0805 11:59:16.189800  409225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:59:16.231086  409225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 11:59:16.231170  409225 ssh_runner.go:195] Run: crio --version
	I0805 11:59:16.260514  409225 ssh_runner.go:195] Run: crio --version
	I0805 11:59:16.292228  409225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:59:16.293998  409225 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:59:16.296980  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:16.297365  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:16.297392  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:16.297623  409225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:59:16.302583  409225 kubeadm.go:883] updating cluster {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 11:59:16.302738  409225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:59:16.302794  409225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:59:16.349503  409225 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:59:16.349528  409225 crio.go:433] Images already preloaded, skipping extraction
	I0805 11:59:16.349580  409225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:59:16.383917  409225 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:59:16.383949  409225 cache_images.go:84] Images are preloaded, skipping loading
	I0805 11:59:16.383972  409225 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0805 11:59:16.384136  409225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:59:16.384214  409225 ssh_runner.go:195] Run: crio config
	I0805 11:59:16.434959  409225 cni.go:84] Creating CNI manager for ""
	I0805 11:59:16.434988  409225 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 11:59:16.435000  409225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 11:59:16.435028  409225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-672593 NodeName:ha-672593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 11:59:16.435208  409225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-672593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
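	As an illustrative sanity check only (not part of this run), a generated config like the one above can be validated on the node before kubeadm consumes it; recent kubeadm releases (roughly v1.26 and later) ship a "config validate" subcommand, and the file path below is the one this log copies the rendered config to further down:
	    # Sketch: validate the rendered kubeadm config with the bundled kubeadm binary.
	    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	    # Print kubeadm's defaults for comparison with the generated file.
	    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config print init-defaults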
	
	I0805 11:59:16.435232  409225 kube-vip.go:115] generating kube-vip config ...
	I0805 11:59:16.435288  409225 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:59:16.447203  409225 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:59:16.447329  409225 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
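	A minimal spot-check of the kube-vip setup described by this manifest could look like the following; the manifest path, interface (eth0) and VIP (192.168.39.254) are taken from the log above, and the commands are only a sketch to run on the control-plane node:
	    # Confirm the static pod manifest that kubelet will pick up.
	    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	    # Once kube-vip holds the leader lease, the VIP should be bound on eth0.
	    ip addr show eth0 | grep 192.168.39.254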
	I0805 11:59:16.447412  409225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:59:16.457108  409225 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 11:59:16.457173  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 11:59:16.466373  409225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0805 11:59:16.482492  409225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:59:16.499667  409225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0805 11:59:16.516586  409225 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 11:59:16.533860  409225 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:59:16.538836  409225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:59:16.694264  409225 ssh_runner.go:195] Run: sudo systemctl start kubelet
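	If kubelet did not come up cleanly after the daemon-reload and start above, a reasonable first step (not part of this run) would be to check the unit status and recent journal entries on the node:
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet --since "10 minutes ago" --no-pager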
	I0805 11:59:16.708817  409225 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.102
	I0805 11:59:16.708844  409225 certs.go:194] generating shared ca certs ...
	I0805 11:59:16.708861  409225 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:59:16.709053  409225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:59:16.709105  409225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:59:16.709115  409225 certs.go:256] generating profile certs ...
	I0805 11:59:16.709220  409225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:59:16.709257  409225 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881
	I0805 11:59:16.709276  409225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.68 192.168.39.210 192.168.39.254]
	I0805 11:59:16.939065  409225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881 ...
	I0805 11:59:16.939097  409225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881: {Name:mk4630f3d373fbbfb12205370c4cc37346a5beb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:59:16.939312  409225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881 ...
	I0805 11:59:16.939339  409225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881: {Name:mk2a397318a5ae0d98183fe8333bffc64ceab241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:59:16.939448  409225 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:59:16.939645  409225 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:59:16.939845  409225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:59:16.939866  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:59:16.939885  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:59:16.939904  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:59:16.939926  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:59:16.939944  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:59:16.939973  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:59:16.939997  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:59:16.940015  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:59:16.940085  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:59:16.940130  409225 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:59:16.940140  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:59:16.940172  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:59:16.940204  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:59:16.940234  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:59:16.940295  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:59:16.940343  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:16.940364  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:59:16.940382  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:59:16.941001  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:59:16.965830  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:59:16.989761  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:59:17.013299  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:59:17.036348  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 11:59:17.060299  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 11:59:17.085221  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:59:17.110250  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:59:17.134049  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:59:17.157720  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:59:17.182218  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:59:17.205877  409225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
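	After this copy step, the certificates listed above should all be present under /var/lib/minikube/certs on the node, and the kubeconfig written from memory should be at /var/lib/minikube/kubeconfig; a purely illustrative way to confirm:
	    sudo ls -l /var/lib/minikube/certs/
	    # First few lines of the kubeconfig that was just written.
	    sudo head -n 5 /var/lib/minikube/kubeconfig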
	I0805 11:59:17.222839  409225 ssh_runner.go:195] Run: openssl version
	I0805 11:59:17.228779  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:59:17.239931  409225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:17.244638  409225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:17.244689  409225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:17.250195  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:59:17.259832  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:59:17.270601  409225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:59:17.274970  409225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:59:17.275010  409225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:59:17.280569  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 11:59:17.290487  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:59:17.301457  409225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:59:17.306343  409225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:59:17.306396  409225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:59:17.312215  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
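	The three ln/openssl sequences above follow the standard OpenSSL CA-directory convention: each trusted certificate under /etc/ssl/certs is reachable through a symlink named after its subject hash. A sketch of the same pattern for the minikube CA, using the paths shown in this log:
	    # Compute the subject hash OpenSSL uses to look the CA up in /etc/ssl/certs.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # Create (or refresh) the hash-named symlink, mirroring what the log shows.
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"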
	I0805 11:59:17.321333  409225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:59:17.325874  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 11:59:17.331201  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 11:59:17.336751  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 11:59:17.342281  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 11:59:17.347907  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 11:59:17.353472  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
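	Each of the openssl runs above uses -checkend 86400, which exits 0 when the certificate remains valid for at least the next 24 hours and non-zero otherwise; an equivalent manual check for one of the certs would be:
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "valid for at least 24h" \
	      || echo "expires within 24h (or is invalid)"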
	I0805 11:59:17.358988  409225 kubeadm.go:392] StartCluster: {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:59:17.359126  409225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 11:59:17.359176  409225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 11:59:17.396955  409225 cri.go:89] found id: "dc4782d00c50bdef5b88780aeef63d08dab7808a96f6ba156107be9f56bc1800"
	I0805 11:59:17.396976  409225 cri.go:89] found id: "1ca52772afb357ffac550e72c9e900f36dcea579d77d6e84c69b53c8af4cc510"
	I0805 11:59:17.396980  409225 cri.go:89] found id: "d4fe200ecf9dca4b1fb3959ea6baebccce67dd34d879297098a2909724d8d3df"
	I0805 11:59:17.396983  409225 cri.go:89] found id: "73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1"
	I0805 11:59:17.396985  409225 cri.go:89] found id: "e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22"
	I0805 11:59:17.396988  409225 cri.go:89] found id: "6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94"
	I0805 11:59:17.396991  409225 cri.go:89] found id: "57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46"
	I0805 11:59:17.396993  409225 cri.go:89] found id: "11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48"
	I0805 11:59:17.396996  409225 cri.go:89] found id: "019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd"
	I0805 11:59:17.397003  409225 cri.go:89] found id: "1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd"
	I0805 11:59:17.397009  409225 cri.go:89] found id: "50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565"
	I0805 11:59:17.397011  409225 cri.go:89] found id: "ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334"
	I0805 11:59:17.397016  409225 cri.go:89] found id: "b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775"
	I0805 11:59:17.397019  409225 cri.go:89] found id: ""
	I0805 11:59:17.397075  409225 ssh_runner.go:195] Run: sudo runc list -f json
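	The container IDs listed above come from crictl filtered to the kube-system namespace. To dig into any one of them on the node, something like the following could be used; <container-id> is a hypothetical placeholder for an ID from the list:
	    # Full (non-quiet) listing with names and states for the same filter.
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	    # Inspect a single container from the list above.
	    sudo crictl inspect <container-id>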
	
	
	==> CRI-O <==
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.826758236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859310826730713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5ce72de-354e-43cc-a162-60e91caee13a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.827500480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23acd318-906f-40c4-b840-95b54e396eaa name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.827580895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23acd318-906f-40c4-b840-95b54e396eaa name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.829435468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23acd318-906f-40c4-b840-95b54e396eaa name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.887036160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b60c14a2-a62d-484e-b7e5-92d21be0c724 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.887120970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b60c14a2-a62d-484e-b7e5-92d21be0c724 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.888520520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56b049c3-0713-47c6-a76a-aa7b4ebba73f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.889040540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859310889012531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56b049c3-0713-47c6-a76a-aa7b4ebba73f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.889519605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65359f57-08d0-45b0-805b-1378107fdc33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.889573929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65359f57-08d0-45b0-805b-1378107fdc33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.890072489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65359f57-08d0-45b0-805b-1378107fdc33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.956036988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6039667d-6ff5-4c87-9f9a-bc5b630b1ef4 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.956116566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6039667d-6ff5-4c87-9f9a-bc5b630b1ef4 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.957262437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f4f6c98-2dbc-42f6-a2da-78a5ce76af44 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.958152067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859310958125902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f4f6c98-2dbc-42f6-a2da-78a5ce76af44 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.959121488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d88c3e6-3607-4cbc-83d7-adba9e2ab4ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.959192001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d88c3e6-3607-4cbc-83d7-adba9e2ab4ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:50 ha-672593 crio[3759]: time="2024-08-05 12:01:50.960073318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d88c3e6-3607-4cbc-83d7-adba9e2ab4ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:51 ha-672593 crio[3759]: time="2024-08-05 12:01:51.013375179Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8c09847-860c-4269-97cf-11657dba73db name=/runtime.v1.RuntimeService/Version
	Aug 05 12:01:51 ha-672593 crio[3759]: time="2024-08-05 12:01:51.013479254Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8c09847-860c-4269-97cf-11657dba73db name=/runtime.v1.RuntimeService/Version
	Aug 05 12:01:51 ha-672593 crio[3759]: time="2024-08-05 12:01:51.014663156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d66e465-4abe-4891-991f-7196f044f08d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:01:51 ha-672593 crio[3759]: time="2024-08-05 12:01:51.015228894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859311015202237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d66e465-4abe-4891-991f-7196f044f08d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:01:51 ha-672593 crio[3759]: time="2024-08-05 12:01:51.016049038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b15440f0-42c7-44e7-ac27-0903fcd622ec name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:51 ha-672593 crio[3759]: time="2024-08-05 12:01:51.016105414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b15440f0-42c7-44e7-ac27-0903fcd622ec name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:01:51 ha-672593 crio[3759]: time="2024-08-05 12:01:51.016924260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b15440f0-42c7-44e7-ac27-0903fcd622ec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	def4e84760678       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   63b2e119430f5       storage-provisioner
	e579894cd5595       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   4773bf48efb8a       kube-controller-manager-ha-672593
	b85ae9f8969ae       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   be0aa43da318c       kube-apiserver-ha-672593
	5f9974eaa7c27       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   dd5b9cacb5cc5       busybox-fc5497c4f-xx72g
	f9b7441282925       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   63b2e119430f5       storage-provisioner
	0842e66122440       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   6407ea95878ee       kube-vip-ha-672593
	1a210faa6df55       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   84d195aa46f38       kube-proxy-wtsdt
	1c480179de34e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   c80856b97ef6d       coredns-7db6d8ff4d-sgd4v
	880c87361835d       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   f2f20b5872376       kindnet-7fndz
	249df3a7b9531       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   43e06c30e0e1a       coredns-7db6d8ff4d-sfh7c
	2c47baf951c9f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   0ab38c276a2d5       kube-scheduler-ha-672593
	bfd2da4a0337d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   05390a861ed7f       etcd-ha-672593
	053ea64b0f339       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   be0aa43da318c       kube-apiserver-ha-672593
	5ffeb8fa20fc0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   4773bf48efb8a       kube-controller-manager-ha-672593
	f332a2eefb38a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   96a63340a808e       busybox-fc5497c4f-xx72g
	73fd9ef194837       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   162aab1f9af67       coredns-7db6d8ff4d-sgd4v
	6354e702fe80a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   60a5e5f93bb15       coredns-7db6d8ff4d-sfh7c
	57cec2b511aa8       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   214360f7ff706       kindnet-7fndz
	11c4e00c9ba78       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   b824fdfadbf52       kube-proxy-wtsdt
	1019d9e100746       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   1c9e20b33b7b7       etcd-ha-672593
	ca9839b56e3e6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   c7429b1a8552f       kube-scheduler-ha-672593
	
	
	==> coredns [1c480179de34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:51772->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[853896924]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-Aug-2024 11:59:32.827) (total time: 10479ms):
	Trace[853896924]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:51772->10.96.0.1:443: read: connection reset by peer 10477ms (11:59:43.304)
	Trace[853896924]: [10.479008471s] [10.479008471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:51772->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:51782->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:51782->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52562->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1899813604]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-Aug-2024 11:59:32.691) (total time: 10614ms):
	Trace[1899813604]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52562->10.96.0.1:443: read: connection reset by peer 10613ms (11:59:43.305)
	Trace[1899813604]: [10.614071628s] [10.614071628s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52562->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51080->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51080->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94] <==
	[INFO] 10.244.1.2:35448 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147398s
	[INFO] 10.244.1.2:52034 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147397s
	[INFO] 10.244.0.4:50553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110672s
	[INFO] 10.244.0.4:47698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069619s
	[INFO] 10.244.0.4:39504 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139191s
	[INFO] 10.244.0.4:35787 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065087s
	[INFO] 10.244.2.2:57478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118877s
	[INFO] 10.244.2.2:44657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121159s
	[INFO] 10.244.2.2:33599 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126768s
	[INFO] 10.244.1.2:54159 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179418s
	[INFO] 10.244.1.2:49562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092072s
	[INFO] 10.244.0.4:42290 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077914s
	[INFO] 10.244.2.2:59634 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164343s
	[INFO] 10.244.2.2:43784 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159677s
	[INFO] 10.244.1.2:49443 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173465s
	[INFO] 10.244.1.2:58280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015744s
	[INFO] 10.244.0.4:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111584s
	[INFO] 10.244.0.4:42223 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078636s
	[INFO] 10.244.0.4:42616 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084454s
	[INFO] 10.244.0.4:49723 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087038s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1] <==
	[INFO] 10.244.2.2:34794 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013565s
	[INFO] 10.244.1.2:33425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168612s
	[INFO] 10.244.1.2:49339 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001876895s
	[INFO] 10.244.1.2:41345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001388007s
	[INFO] 10.244.1.2:39680 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097906s
	[INFO] 10.244.1.2:38660 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162674s
	[INFO] 10.244.0.4:37518 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828264s
	[INFO] 10.244.0.4:43389 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136081s
	[INFO] 10.244.0.4:58226 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071105s
	[INFO] 10.244.0.4:43658 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098104s
	[INFO] 10.244.2.2:40561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109999s
	[INFO] 10.244.1.2:41071 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120854s
	[INFO] 10.244.1.2:40710 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080783s
	[INFO] 10.244.0.4:54672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011185s
	[INFO] 10.244.0.4:55288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117161s
	[INFO] 10.244.0.4:41744 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068123s
	[INFO] 10.244.2.2:60620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013916s
	[INFO] 10.244.2.2:52672 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153187s
	[INFO] 10.244.1.2:36870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144481s
	[INFO] 10.244.1.2:43017 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166959s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1936&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1877&timeout=6m1s&timeoutSeconds=361&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1893&timeout=8m8s&timeoutSeconds=488&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-672593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T11_47_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:47:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:01:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:00:06 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:00:06 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:00:06 +0000   Mon, 05 Aug 2024 11:47:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:00:06 +0000   Mon, 05 Aug 2024 11:48:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-672593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb8829a6b1d145d6aee2ea0e80194fe4
	  System UUID:                fb8829a6-b1d1-45d6-aee2-ea0e80194fe4
	  Boot ID:                    ecb22512-bcb2-43ab-b502-fc0c346e754f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xx72g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-sfh7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-sgd4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-672593                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-7fndz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-672593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-672593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-wtsdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-672593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-672593                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 104s                 kube-proxy       
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    14m                  kubelet          Node ha-672593 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                  kubelet          Node ha-672593 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                  kubelet          Node ha-672593 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   NodeReady                13m                  kubelet          Node ha-672593 status is now: NodeReady
	  Normal   RegisteredNode           12m                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Warning  ContainerGCFailed        3m3s (x2 over 4m3s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           97s                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   RegisteredNode           95s                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   RegisteredNode           32s                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	
	
	Name:               ha-672593-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:48:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:01:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-672593-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8aa3c6ca9e9a439e91c6c120c9ce9ce7
	  System UUID:                8aa3c6ca-9e9a-439e-91c6-c120c9ce9ce7
	  Boot ID:                    9080c272-ba5e-4e9d-8215-dd4f5b1ffe33
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vn64j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-672593-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-85fm7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-672593-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-672593-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mdwh2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-672593-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-672593-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-672593-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-672593-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-672593-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  NodeNotReady             8m53s                  node-controller  Node ha-672593-m02 status is now: NodeNotReady
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m11s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m11s)  kubelet          Node ha-672593-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m11s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           97s                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           95s                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	
	
	Name:               ha-672593-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_50_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:01:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:01:22 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:01:22 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:01:22 +0000   Mon, 05 Aug 2024 11:50:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:01:22 +0000   Mon, 05 Aug 2024 11:50:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-672593-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 95bc9a27650e44d8882cc62883736cdc
	  System UUID:                95bc9a27-650e-44d8-882c-c62883736cdc
	  Boot ID:                    23403de9-c38d-4961-8c66-ecd038c8eda1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dq7jg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-672593-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-wnbr8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-672593-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-672593-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4q4tq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-672593-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-672593-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-672593-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-672593-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-672593-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal   RegisteredNode           97s                node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node ha-672593-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node ha-672593-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node ha-672593-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 60s                kubelet          Node ha-672593-m03 has been rebooted, boot id: 23403de9-c38d-4961-8c66-ecd038c8eda1
	  Normal   RegisteredNode           32s                node-controller  Node ha-672593-m03 event: Registered Node ha-672593-m03 in Controller
	
	
	Name:               ha-672593-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_51_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:51:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:01:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:01:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:01:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:01:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:01:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-672593-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5561d3ea391496e983c8078f06ff6c0
	  System UUID:                f5561d3e-a391-496e-983c-8078f06ff6c0
	  Boot ID:                    10eb468d-1081-40f6-8d09-5554203cd004
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6dfc5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-lpp7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-672593-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   NodeReady                9m49s              kubelet          Node ha-672593-m04 status is now: NodeReady
	  Normal   RegisteredNode           97s                node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   NodeNotReady             57s                node-controller  Node ha-672593-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 9s)    kubelet          Node ha-672593-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 9s)    kubelet          Node ha-672593-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 9s)    kubelet          Node ha-672593-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-672593-m04 has been rebooted, boot id: 10eb468d-1081-40f6-8d09-5554203cd004
	  Normal   NodeReady                8s                 kubelet          Node ha-672593-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.056383] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.055926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054067] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.198790] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.118095] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.300760] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.230253] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.263666] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.055709] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.688004] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.472104] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[Aug 5 11:48] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.212364] kauditd_printk_skb: 29 callbacks suppressed
	[ +52.825644] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 5 11:59] systemd-fstab-generator[3610]: Ignoring "noauto" option for root device
	[  +0.142491] systemd-fstab-generator[3622]: Ignoring "noauto" option for root device
	[  +0.185678] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.153541] systemd-fstab-generator[3648]: Ignoring "noauto" option for root device
	[  +0.401182] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +0.824558] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +3.894626] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.323701] kauditd_printk_skb: 85 callbacks suppressed
	[Aug 5 12:00] kauditd_printk_skb: 1 callbacks suppressed
	[ +11.023263] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd] <==
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T11:57:43.535859Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:57:43.234938Z","time spent":"300.907105ms","remote":"127.0.0.1:33196","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T11:57:43.58185Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T11:57:43.582035Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T11:57:43.583483Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b93c4bc4617b0fe","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-05T11:57:43.583676Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.58371Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.583734Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.583849Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.583986Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.584056Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.584098Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.584106Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584116Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584167Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584268Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.58435Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584403Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584447Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.587657Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-08-05T11:57:43.587795Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-08-05T11:57:43.587834Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-672593","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.102:2380"],"advertise-client-urls":["https://192.168.39.102:2379"]}
	
	
	==> etcd [bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a] <==
	{"level":"warn","ts":"2024-08-05T12:00:46.628066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T12:00:46.965469Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9c6cca754efc9caa","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:46.96778Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9c6cca754efc9caa","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:48.825102Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:48.825229Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:51.966633Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9c6cca754efc9caa","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:51.968018Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9c6cca754efc9caa","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:52.827839Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:52.828061Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-05T12:00:56.822459Z","caller":"traceutil/trace.go:171","msg":"trace[1272595612] transaction","detail":"{read_only:false; response_revision:2366; number_of_response:1; }","duration":"241.591025ms","start":"2024-08-05T12:00:56.58083Z","end":"2024-08-05T12:00:56.822421Z","steps":["trace[1272595612] 'process raft request'  (duration: 241.449398ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T12:00:56.822063Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"9c6cca754efc9caa","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"93.879892ms"}
	{"level":"warn","ts":"2024-08-05T12:00:56.826092Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"64db3d4ba151eb25","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"97.909241ms"}
	{"level":"warn","ts":"2024-08-05T12:00:56.83205Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:56.832144Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:56.967299Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9c6cca754efc9caa","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:00:56.968516Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9c6cca754efc9caa","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:01:00.833851Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:01:00.83404Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-05T12:01:01.389194Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.392695Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.393452Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.4093Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"9c6cca754efc9caa","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-05T12:01:01.409492Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.425502Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"9c6cca754efc9caa","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-05T12:01:01.425606Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	
	
	==> kernel <==
	 12:01:51 up 14 min,  0 users,  load average: 0.55, 0.72, 0.43
	Linux ha-672593 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46] <==
	I0805 11:57:17.433866       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:57:17.433926       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:57:17.434171       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:57:17.434202       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:57:17.434283       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:57:17.434309       1 main.go:299] handling current node
	I0805 11:57:17.434324       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:57:17.434330       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:57:27.435455       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:57:27.435605       1 main.go:299] handling current node
	I0805 11:57:27.435658       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:57:27.435666       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:57:27.436012       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:57:27.436036       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:57:27.436103       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:57:27.436134       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:57:37.436099       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:57:37.436141       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:57:37.436401       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:57:37.436441       1 main.go:299] handling current node
	I0805 11:57:37.436476       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:57:37.436493       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:57:37.436621       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:57:37.436644       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	E0805 11:57:42.354277       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829] <==
	I0805 12:01:12.459723       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 12:01:22.450064       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 12:01:22.450249       1 main.go:299] handling current node
	I0805 12:01:22.450288       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 12:01:22.450307       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 12:01:22.450466       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 12:01:22.450493       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 12:01:22.450551       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 12:01:22.450584       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 12:01:32.454146       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 12:01:32.454286       1 main.go:299] handling current node
	I0805 12:01:32.454337       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 12:01:32.454345       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 12:01:32.454864       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 12:01:32.454901       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 12:01:32.455135       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 12:01:32.455165       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 12:01:42.457088       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 12:01:42.457134       1 main.go:299] handling current node
	I0805 12:01:42.457148       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 12:01:42.457153       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 12:01:42.457300       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 12:01:42.457327       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 12:01:42.457395       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 12:01:42.457415       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c] <==
	I0805 11:59:21.825180       1 options.go:221] external host was not specified, using 192.168.39.102
	I0805 11:59:21.828521       1 server.go:148] Version: v1.30.3
	I0805 11:59:21.828593       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:59:22.285768       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0805 11:59:22.294913       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 11:59:22.298168       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0805 11:59:22.298198       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0805 11:59:22.298358       1 instance.go:299] Using reconciler: lease
	W0805 11:59:42.285935       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0805 11:59:42.286176       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0805 11:59:42.298872       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b] <==
	I0805 12:00:04.368653       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0805 12:00:04.368666       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0805 12:00:04.312689       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0805 12:00:04.417407       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 12:00:04.418008       1 aggregator.go:165] initial CRD sync complete...
	I0805 12:00:04.418130       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 12:00:04.418179       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 12:00:04.512475       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 12:00:04.518407       1 cache.go:39] Caches are synced for autoregister controller
	I0805 12:00:04.518545       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 12:00:04.520141       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 12:00:04.518578       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 12:00:04.518590       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 12:00:04.519515       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 12:00:04.519536       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 12:00:04.529673       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 12:00:04.530310       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 12:00:04.530384       1 policy_source.go:224] refreshing policies
	W0805 12:00:04.531148       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.210 192.168.39.68]
	I0805 12:00:04.532635       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 12:00:04.541385       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 12:00:04.547522       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0805 12:00:04.554561       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0805 12:00:05.318715       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 12:00:05.771837       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.210 192.168.39.68]
	
	
	==> kube-controller-manager [5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858] <==
	I0805 11:59:22.036038       1 serving.go:380] Generated self-signed cert in-memory
	I0805 11:59:22.485631       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0805 11:59:22.485676       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:59:22.487282       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0805 11:59:22.487855       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 11:59:22.488067       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 11:59:22.488152       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0805 11:59:43.304376       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.102:8443/healthz\": dial tcp 192.168.39.102:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567] <==
	I0805 12:00:16.540487       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 12:00:16.543831       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 12:00:16.557596       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0805 12:00:16.557726       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 12:00:16.561074       1 shared_informer.go:320] Caches are synced for persistent volume
	I0805 12:00:16.577074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.327078ms"
	I0805 12:00:16.577168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.085µs"
	I0805 12:00:16.596526       1 shared_informer.go:320] Caches are synced for HPA
	I0805 12:00:16.687574       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:00:16.712917       1 shared_informer.go:320] Caches are synced for attach detach
	I0805 12:00:16.726560       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:00:17.178333       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:00:17.181486       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:00:17.181519       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 12:00:28.237770       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rctzh\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 12:00:28.238266       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"15a14bcf-7628-4b88-a547-4a20d328b035", APIVersion:"v1", ResourceVersion:"258", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rctzh": the object has been modified; please apply your changes to the latest version and try again
	I0805 12:00:28.261781       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.611089ms"
	E0805 12:00:28.261873       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0805 12:00:28.262261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.775µs"
	I0805 12:00:28.267572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="186.595µs"
	I0805 12:00:52.326614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.177838ms"
	I0805 12:00:52.327069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="203.831µs"
	I0805 12:01:11.418621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.226276ms"
	I0805 12:01:11.418924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.528µs"
	I0805 12:01:43.038445       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-672593-m04"
	
	
	==> kube-proxy [11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48] <==
	E0805 11:56:28.311792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:28.311853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:28.311872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:34.647715       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:34.648621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:34.648624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:34.648680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:34.648798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:34.648816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:43.863816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:43.864052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:43.864211       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:43.864276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:46.936482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:46.936615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:02.297110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:02.297156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:05.369264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:05.369455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:11.512173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:11.512673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:39.160405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:39.160758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:39.160918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:39.161015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7] <==
	I0805 11:59:22.547037       1 server_linux.go:69] "Using iptables proxy"
	E0805 11:59:23.608298       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:26.680598       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:29.752543       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:35.895859       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:48.183682       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 12:00:06.616722       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0805 12:00:06.620088       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0805 12:00:06.727178       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:00:06.727351       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:00:06.727387       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:00:06.735802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:00:06.737603       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:00:06.737672       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:00:06.745171       1 config.go:192] "Starting service config controller"
	I0805 12:00:06.745243       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:00:06.745300       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:00:06.745321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:00:06.746206       1 config.go:319] "Starting node config controller"
	I0805 12:00:06.746263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:00:06.845876       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:00:06.846011       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:00:06.846364       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248] <==
	W0805 11:59:58.391453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.102:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:58.391520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.102:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:58.742710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.102:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:58.742772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.102:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:58.984121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:58.984158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:58.985735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:58.985779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:59.518357       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:59.518420       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:59.842662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:59.842734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 12:00:01.700540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 12:00:01.700638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 12:00:01.708795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.102:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 12:00:01.708875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.102:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 12:00:04.415532       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 12:00:04.415716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 12:00:04.420391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 12:00:04.421082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 12:00:04.421280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 12:00:04.421590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 12:00:04.421735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 12:00:04.422066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 12:00:25.112927       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334] <==
	W0805 11:57:40.344688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 11:57:40.344734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 11:57:41.020325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:41.020355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 11:57:41.042771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 11:57:41.042836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 11:57:41.248719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 11:57:41.248771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 11:57:41.304894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 11:57:41.305028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 11:57:41.608869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 11:57:41.609030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 11:57:41.773774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 11:57:41.773880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 11:57:41.830572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:41.830618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 11:57:41.866668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:57:41.866720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:57:42.062515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 11:57:42.062552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 11:57:42.275826       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 11:57:42.276037       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 11:57:43.021019       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:43.021062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:43.482559       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 05 12:00:06 ha-672593 kubelet[1363]: E0805 12:00:06.010212    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9c3a4e49-f517-40e4-bd83-1e69b6a7550c)\"" pod="kube-system/storage-provisioner" podUID="9c3a4e49-f517-40e4-bd83-1e69b6a7550c"
	Aug 05 12:00:06 ha-672593 kubelet[1363]: E0805 12:00:06.615443    1363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-672593?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Aug 05 12:00:06 ha-672593 kubelet[1363]: W0805 12:00:06.615509    1363 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 05 12:00:06 ha-672593 kubelet[1363]: E0805 12:00:06.615706    1363 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 05 12:00:06 ha-672593 kubelet[1363]: E0805 12:00:06.615787    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-672593\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 05 12:00:06 ha-672593 kubelet[1363]: I0805 12:00:06.615887    1363 status_manager.go:853] "Failed to get status for pod" podUID="9c3a4e49-f517-40e4-bd83-1e69b6a7550c" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 05 12:00:06 ha-672593 kubelet[1363]: E0805 12:00:06.616222    1363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-672593.17e8d3166709dc63  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-672593,UID:51a381773e823990c7e015983b07a0d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-672593,},FirstTimestamp:2024-08-05 11:55:50.640655459 +0000 UTC m=+482.750536883,LastTimestamp:2024-08-05 11:55:50.640655459 +0000 UTC m=+482.750536883,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-672593,}"
	Aug 05 12:00:19 ha-672593 kubelet[1363]: I0805 12:00:19.010503    1363 scope.go:117] "RemoveContainer" containerID="f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97"
	Aug 05 12:00:19 ha-672593 kubelet[1363]: E0805 12:00:19.010757    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9c3a4e49-f517-40e4-bd83-1e69b6a7550c)\"" pod="kube-system/storage-provisioner" podUID="9c3a4e49-f517-40e4-bd83-1e69b6a7550c"
	Aug 05 12:00:27 ha-672593 kubelet[1363]: I0805 12:00:27.028031    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-xx72g" podStartSLOduration=589.413659913 podStartE2EDuration="9m52.027990244s" podCreationTimestamp="2024-08-05 11:50:35 +0000 UTC" firstStartedPulling="2024-08-05 11:50:37.761426799 +0000 UTC m=+169.871308219" lastFinishedPulling="2024-08-05 11:50:40.37575714 +0000 UTC m=+172.485638550" observedRunningTime="2024-08-05 11:50:40.762141515 +0000 UTC m=+172.872022943" watchObservedRunningTime="2024-08-05 12:00:27.027990244 +0000 UTC m=+759.137871665"
	Aug 05 12:00:33 ha-672593 kubelet[1363]: I0805 12:00:33.010732    1363 scope.go:117] "RemoveContainer" containerID="f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97"
	Aug 05 12:00:33 ha-672593 kubelet[1363]: E0805 12:00:33.011021    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9c3a4e49-f517-40e4-bd83-1e69b6a7550c)\"" pod="kube-system/storage-provisioner" podUID="9c3a4e49-f517-40e4-bd83-1e69b6a7550c"
	Aug 05 12:00:45 ha-672593 kubelet[1363]: I0805 12:00:45.010423    1363 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-672593" podUID="36928548-a08e-49a4-a82a-6c6c3fb52b48"
	Aug 05 12:00:45 ha-672593 kubelet[1363]: I0805 12:00:45.011128    1363 scope.go:117] "RemoveContainer" containerID="f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97"
	Aug 05 12:00:45 ha-672593 kubelet[1363]: I0805 12:00:45.058795    1363 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-672593"
	Aug 05 12:00:48 ha-672593 kubelet[1363]: E0805 12:00:48.031866    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:00:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:00:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:00:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:00:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 12:01:48 ha-672593 kubelet[1363]: E0805 12:01:48.029877    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:01:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:01:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:01:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:01:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0805 12:01:50.507591  410620 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-672593 -n ha-672593
helpers_test.go:261: (dbg) Run:  kubectl --context ha-672593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.15s)

x
+
TestMultiControlPlane/serial/StopCluster (141.85s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 stop -v=7 --alsologtostderr
E0805 12:02:52.927344  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 stop -v=7 --alsologtostderr: exit status 82 (2m0.478712921s)

-- stdout --
	* Stopping node "ha-672593-m04"  ...
	
	

-- /stdout --
** stderr ** 
	I0805 12:02:10.521581  411032 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:02:10.521692  411032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:02:10.521700  411032 out.go:304] Setting ErrFile to fd 2...
	I0805 12:02:10.521704  411032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:02:10.521870  411032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:02:10.522111  411032 out.go:298] Setting JSON to false
	I0805 12:02:10.522186  411032 mustload.go:65] Loading cluster: ha-672593
	I0805 12:02:10.522529  411032 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:02:10.522643  411032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 12:02:10.522822  411032 mustload.go:65] Loading cluster: ha-672593
	I0805 12:02:10.522958  411032 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:02:10.522986  411032 stop.go:39] StopHost: ha-672593-m04
	I0805 12:02:10.523303  411032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:02:10.523356  411032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:02:10.538645  411032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0805 12:02:10.539127  411032 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:02:10.539723  411032 main.go:141] libmachine: Using API Version  1
	I0805 12:02:10.539763  411032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:02:10.540127  411032 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:02:10.542465  411032 out.go:177] * Stopping node "ha-672593-m04"  ...
	I0805 12:02:10.544237  411032 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 12:02:10.544282  411032 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 12:02:10.544565  411032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 12:02:10.544604  411032 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 12:02:10.547560  411032 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 12:02:10.548050  411032 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 13:01:37 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 12:02:10.548088  411032 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 12:02:10.548207  411032 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 12:02:10.548412  411032 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 12:02:10.548600  411032 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 12:02:10.548774  411032 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	I0805 12:02:10.635661  411032 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 12:02:10.689151  411032 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 12:02:10.742919  411032 main.go:141] libmachine: Stopping "ha-672593-m04"...
	I0805 12:02:10.742956  411032 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 12:02:10.744610  411032 main.go:141] libmachine: (ha-672593-m04) Calling .Stop
	I0805 12:02:10.748169  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 0/120
	I0805 12:02:11.750544  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 1/120
	I0805 12:02:12.751915  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 2/120
	I0805 12:02:13.753364  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 3/120
	I0805 12:02:14.754703  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 4/120
	I0805 12:02:15.756918  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 5/120
	I0805 12:02:16.758407  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 6/120
	I0805 12:02:17.760354  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 7/120
	I0805 12:02:18.762011  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 8/120
	I0805 12:02:19.763897  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 9/120
	I0805 12:02:20.766147  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 10/120
	I0805 12:02:21.767618  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 11/120
	I0805 12:02:22.769588  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 12/120
	I0805 12:02:23.771082  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 13/120
	I0805 12:02:24.772601  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 14/120
	I0805 12:02:25.774652  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 15/120
	I0805 12:02:26.776017  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 16/120
	I0805 12:02:27.778204  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 17/120
	I0805 12:02:28.779615  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 18/120
	I0805 12:02:29.781035  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 19/120
	I0805 12:02:30.782625  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 20/120
	I0805 12:02:31.784112  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 21/120
	I0805 12:02:32.786345  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 22/120
	I0805 12:02:33.787578  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 23/120
	I0805 12:02:34.789983  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 24/120
	I0805 12:02:35.792115  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 25/120
	I0805 12:02:36.793861  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 26/120
	I0805 12:02:37.795342  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 27/120
	I0805 12:02:38.796578  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 28/120
	I0805 12:02:39.798954  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 29/120
	I0805 12:02:40.801238  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 30/120
	I0805 12:02:41.802601  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 31/120
	I0805 12:02:42.804013  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 32/120
	I0805 12:02:43.806300  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 33/120
	I0805 12:02:44.807820  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 34/120
	I0805 12:02:45.809924  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 35/120
	I0805 12:02:46.811348  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 36/120
	I0805 12:02:47.813064  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 37/120
	I0805 12:02:48.814352  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 38/120
	I0805 12:02:49.815731  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 39/120
	I0805 12:02:50.817808  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 40/120
	I0805 12:02:51.819139  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 41/120
	I0805 12:02:52.820804  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 42/120
	I0805 12:02:53.822151  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 43/120
	I0805 12:02:54.824402  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 44/120
	I0805 12:02:55.826300  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 45/120
	I0805 12:02:56.827700  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 46/120
	I0805 12:02:57.828975  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 47/120
	I0805 12:02:58.830538  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 48/120
	I0805 12:02:59.832001  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 49/120
	I0805 12:03:00.833369  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 50/120
	I0805 12:03:01.834671  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 51/120
	I0805 12:03:02.836323  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 52/120
	I0805 12:03:03.837793  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 53/120
	I0805 12:03:04.839193  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 54/120
	I0805 12:03:05.841107  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 55/120
	I0805 12:03:06.842614  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 56/120
	I0805 12:03:07.843785  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 57/120
	I0805 12:03:08.845248  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 58/120
	I0805 12:03:09.846393  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 59/120
	I0805 12:03:10.848571  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 60/120
	I0805 12:03:11.849826  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 61/120
	I0805 12:03:12.851391  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 62/120
	I0805 12:03:13.852870  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 63/120
	I0805 12:03:14.854126  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 64/120
	I0805 12:03:15.856402  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 65/120
	I0805 12:03:16.858309  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 66/120
	I0805 12:03:17.859547  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 67/120
	I0805 12:03:18.860994  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 68/120
	I0805 12:03:19.863101  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 69/120
	I0805 12:03:20.865232  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 70/120
	I0805 12:03:21.867000  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 71/120
	I0805 12:03:22.868471  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 72/120
	I0805 12:03:23.870596  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 73/120
	I0805 12:03:24.872088  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 74/120
	I0805 12:03:25.874175  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 75/120
	I0805 12:03:26.875586  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 76/120
	I0805 12:03:27.877316  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 77/120
	I0805 12:03:28.878799  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 78/120
	I0805 12:03:29.880371  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 79/120
	I0805 12:03:30.882714  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 80/120
	I0805 12:03:31.884013  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 81/120
	I0805 12:03:32.886382  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 82/120
	I0805 12:03:33.887711  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 83/120
	I0805 12:03:34.888974  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 84/120
	I0805 12:03:35.890233  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 85/120
	I0805 12:03:36.891600  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 86/120
	I0805 12:03:37.892721  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 87/120
	I0805 12:03:38.894129  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 88/120
	I0805 12:03:39.895407  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 89/120
	I0805 12:03:40.897386  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 90/120
	I0805 12:03:41.899283  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 91/120
	I0805 12:03:42.900767  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 92/120
	I0805 12:03:43.902417  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 93/120
	I0805 12:03:44.903835  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 94/120
	I0805 12:03:45.906029  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 95/120
	I0805 12:03:46.907479  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 96/120
	I0805 12:03:47.909148  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 97/120
	I0805 12:03:48.910786  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 98/120
	I0805 12:03:49.912226  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 99/120
	I0805 12:03:50.914554  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 100/120
	I0805 12:03:51.916025  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 101/120
	I0805 12:03:52.918298  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 102/120
	I0805 12:03:53.919642  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 103/120
	I0805 12:03:54.921056  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 104/120
	I0805 12:03:55.922966  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 105/120
	I0805 12:03:56.924415  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 106/120
	I0805 12:03:57.925889  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 107/120
	I0805 12:03:58.927281  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 108/120
	I0805 12:03:59.928822  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 109/120
	I0805 12:04:00.930791  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 110/120
	I0805 12:04:01.932249  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 111/120
	I0805 12:04:02.933664  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 112/120
	I0805 12:04:03.935112  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 113/120
	I0805 12:04:04.937385  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 114/120
	I0805 12:04:05.939477  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 115/120
	I0805 12:04:06.940941  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 116/120
	I0805 12:04:07.942881  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 117/120
	I0805 12:04:08.944496  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 118/120
	I0805 12:04:09.946048  411032 main.go:141] libmachine: (ha-672593-m04) Waiting for machine to stop 119/120
	I0805 12:04:10.947015  411032 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 12:04:10.947077  411032 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0805 12:04:10.948369  411032 out.go:177] 
	W0805 12:04:10.949736  411032 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0805 12:04:10.949767  411032 out.go:239] * 
	* 
	W0805 12:04:10.953408  411032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 12:04:10.954784  411032 out.go:177] 

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-672593 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
E0805 12:04:15.972893  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr: exit status 3 (18.967068941s)

-- stdout --
	ha-672593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672593-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0805 12:04:11.006526  411487 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:04:11.006787  411487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:04:11.006798  411487 out.go:304] Setting ErrFile to fd 2...
	I0805 12:04:11.006802  411487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:04:11.007024  411487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:04:11.007257  411487 out.go:298] Setting JSON to false
	I0805 12:04:11.007299  411487 mustload.go:65] Loading cluster: ha-672593
	I0805 12:04:11.007342  411487 notify.go:220] Checking for updates...
	I0805 12:04:11.007700  411487 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:04:11.007718  411487 status.go:255] checking status of ha-672593 ...
	I0805 12:04:11.008199  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.008261  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.031538  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34219
	I0805 12:04:11.032134  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.032872  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.032901  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.033282  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.033518  411487 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 12:04:11.035335  411487 status.go:330] ha-672593 host status = "Running" (err=<nil>)
	I0805 12:04:11.035359  411487 host.go:66] Checking if "ha-672593" exists ...
	I0805 12:04:11.035697  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.035737  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.050702  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0805 12:04:11.051072  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.051496  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.051510  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.051803  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.051996  411487 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 12:04:11.054759  411487 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 12:04:11.055244  411487 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 12:04:11.055266  411487 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 12:04:11.055377  411487 host.go:66] Checking if "ha-672593" exists ...
	I0805 12:04:11.055787  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.055845  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.071284  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44287
	I0805 12:04:11.071716  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.072220  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.072243  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.072546  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.072706  411487 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 12:04:11.072919  411487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:04:11.072947  411487 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 12:04:11.075367  411487 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 12:04:11.075775  411487 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 12:04:11.075807  411487 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 12:04:11.075950  411487 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 12:04:11.076129  411487 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 12:04:11.076412  411487 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 12:04:11.076571  411487 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 12:04:11.166377  411487 ssh_runner.go:195] Run: systemctl --version
	I0805 12:04:11.174550  411487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:04:11.191260  411487 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 12:04:11.191292  411487 api_server.go:166] Checking apiserver status ...
	I0805 12:04:11.191335  411487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:04:11.207927  411487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5058/cgroup
	W0805 12:04:11.218121  411487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:04:11.218201  411487 ssh_runner.go:195] Run: ls
	I0805 12:04:11.222869  411487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 12:04:11.227228  411487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 12:04:11.227257  411487 status.go:422] ha-672593 apiserver status = Running (err=<nil>)
	I0805 12:04:11.227269  411487 status.go:257] ha-672593 status: &{Name:ha-672593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:04:11.227302  411487 status.go:255] checking status of ha-672593-m02 ...
	I0805 12:04:11.227711  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.227779  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.244212  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I0805 12:04:11.244689  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.245262  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.245286  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.245664  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.245866  411487 main.go:141] libmachine: (ha-672593-m02) Calling .GetState
	I0805 12:04:11.247407  411487 status.go:330] ha-672593-m02 host status = "Running" (err=<nil>)
	I0805 12:04:11.247422  411487 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 12:04:11.247730  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.247794  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.263631  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I0805 12:04:11.264117  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.264595  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.264617  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.264932  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.265170  411487 main.go:141] libmachine: (ha-672593-m02) Calling .GetIP
	I0805 12:04:11.268556  411487 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 12:04:11.269000  411487 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:59:28 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 12:04:11.269035  411487 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 12:04:11.269385  411487 host.go:66] Checking if "ha-672593-m02" exists ...
	I0805 12:04:11.269771  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.269821  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.284979  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0805 12:04:11.285406  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.285871  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.285896  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.286214  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.286394  411487 main.go:141] libmachine: (ha-672593-m02) Calling .DriverName
	I0805 12:04:11.286563  411487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:04:11.286585  411487 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHHostname
	I0805 12:04:11.289186  411487 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 12:04:11.289580  411487 main.go:141] libmachine: (ha-672593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:7b:e8", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:59:28 +0000 UTC Type:0 Mac:52:54:00:67:7b:e8 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-672593-m02 Clientid:01:52:54:00:67:7b:e8}
	I0805 12:04:11.289607  411487 main.go:141] libmachine: (ha-672593-m02) DBG | domain ha-672593-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:67:7b:e8 in network mk-ha-672593
	I0805 12:04:11.289769  411487 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHPort
	I0805 12:04:11.289924  411487 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHKeyPath
	I0805 12:04:11.290079  411487 main.go:141] libmachine: (ha-672593-m02) Calling .GetSSHUsername
	I0805 12:04:11.290228  411487 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m02/id_rsa Username:docker}
	I0805 12:04:11.374488  411487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:04:11.393281  411487 kubeconfig.go:125] found "ha-672593" server: "https://192.168.39.254:8443"
	I0805 12:04:11.393310  411487 api_server.go:166] Checking apiserver status ...
	I0805 12:04:11.393343  411487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:04:11.415017  411487 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	W0805 12:04:11.425482  411487 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:04:11.425542  411487 ssh_runner.go:195] Run: ls
	I0805 12:04:11.430567  411487 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 12:04:11.437340  411487 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 12:04:11.437369  411487 status.go:422] ha-672593-m02 apiserver status = Running (err=<nil>)
	I0805 12:04:11.437382  411487 status.go:257] ha-672593-m02 status: &{Name:ha-672593-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:04:11.437424  411487 status.go:255] checking status of ha-672593-m04 ...
	I0805 12:04:11.437737  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.437782  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.453483  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0805 12:04:11.453969  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.454535  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.454556  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.454862  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.455068  411487 main.go:141] libmachine: (ha-672593-m04) Calling .GetState
	I0805 12:04:11.456654  411487 status.go:330] ha-672593-m04 host status = "Running" (err=<nil>)
	I0805 12:04:11.456673  411487 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 12:04:11.456973  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.457020  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.473169  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I0805 12:04:11.473683  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.474188  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.474209  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.474531  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.474714  411487 main.go:141] libmachine: (ha-672593-m04) Calling .GetIP
	I0805 12:04:11.477498  411487 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 12:04:11.477953  411487 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 13:01:37 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 12:04:11.477970  411487 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 12:04:11.478141  411487 host.go:66] Checking if "ha-672593-m04" exists ...
	I0805 12:04:11.478444  411487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:04:11.478506  411487 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:04:11.494045  411487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0805 12:04:11.494539  411487 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:04:11.495020  411487 main.go:141] libmachine: Using API Version  1
	I0805 12:04:11.495047  411487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:04:11.495363  411487 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:04:11.495591  411487 main.go:141] libmachine: (ha-672593-m04) Calling .DriverName
	I0805 12:04:11.495810  411487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:04:11.495831  411487 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHHostname
	I0805 12:04:11.498826  411487 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 12:04:11.499291  411487 main.go:141] libmachine: (ha-672593-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:8c:55", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 13:01:37 +0000 UTC Type:0 Mac:52:54:00:23:8c:55 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-672593-m04 Clientid:01:52:54:00:23:8c:55}
	I0805 12:04:11.499306  411487 main.go:141] libmachine: (ha-672593-m04) DBG | domain ha-672593-m04 has defined IP address 192.168.39.4 and MAC address 52:54:00:23:8c:55 in network mk-ha-672593
	I0805 12:04:11.499490  411487 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHPort
	I0805 12:04:11.499678  411487 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHKeyPath
	I0805 12:04:11.499880  411487 main.go:141] libmachine: (ha-672593-m04) Calling .GetSSHUsername
	I0805 12:04:11.500019  411487 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593-m04/id_rsa Username:docker}
	W0805 12:04:29.923984  411487 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.4:22: connect: no route to host
	W0805 12:04:29.924109  411487 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	E0805 12:04:29.924133  411487 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	I0805 12:04:29.924145  411487 status.go:257] ha-672593-m04 status: &{Name:ha-672593-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0805 12:04:29.924178  411487 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host

** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-672593 -n ha-672593
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-672593 logs -n 25: (1.74044599s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m04 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp testdata/cp-test.txt                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593:/home/docker/cp-test_ha-672593-m04_ha-672593.txt                       |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593 sudo cat                                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593.txt                                 |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m02:/home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m02 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m03:/home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n                                                                 | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | ha-672593-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-672593 ssh -n ha-672593-m03 sudo cat                                          | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC | 05 Aug 24 11:52 UTC |
	|         | /home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-672593 node stop m02 -v=7                                                     | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-672593 node start m02 -v=7                                                    | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-672593 -v=7                                                           | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-672593 -v=7                                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-672593 --wait=true -v=7                                                    | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 11:57 UTC | 05 Aug 24 12:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-672593                                                                | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 12:01 UTC |                     |
	| node    | ha-672593 node delete m03 -v=7                                                   | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 12:01 UTC | 05 Aug 24 12:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-672593 stop -v=7                                                              | ha-672593 | jenkins | v1.33.1 | 05 Aug 24 12:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:57:42
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
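
Editor's note: every entry below uses the glog/klog prefix documented on the line above. As a reading aid only, here is a small stand-alone Go sketch (not part of minikube or this test suite) that splits one such line into severity, date, time, process id, source location and message; the field labels are informal, not minikube identifiers.

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the prefix described above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := "I0805 11:57:42.587028  409225 out.go:291] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
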
	I0805 11:57:42.587028  409225 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:57:42.587187  409225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:57:42.587198  409225 out.go:304] Setting ErrFile to fd 2...
	I0805 11:57:42.587202  409225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:57:42.587365  409225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:57:42.588019  409225 out.go:298] Setting JSON to false
	I0805 11:57:42.588988  409225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6010,"bootTime":1722853053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:57:42.589064  409225 start.go:139] virtualization: kvm guest
	I0805 11:57:42.591367  409225 out.go:177] * [ha-672593] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:57:42.592840  409225 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:57:42.592881  409225 notify.go:220] Checking for updates...
	I0805 11:57:42.595289  409225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:57:42.596561  409225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:57:42.597698  409225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:57:42.598774  409225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:57:42.599903  409225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:57:42.601461  409225 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:57:42.601566  409225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:57:42.601971  409225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:57:42.602059  409225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:57:42.617236  409225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I0805 11:57:42.617734  409225 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:57:42.618353  409225 main.go:141] libmachine: Using API Version  1
	I0805 11:57:42.618385  409225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:57:42.618708  409225 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:57:42.618889  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:57:42.653788  409225 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 11:57:42.654954  409225 start.go:297] selected driver: kvm2
	I0805 11:57:42.654967  409225 start.go:901] validating driver "kvm2" against &{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:57:42.655104  409225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:57:42.655421  409225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:57:42.655486  409225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:57:42.670195  409225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:57:42.671104  409225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:57:42.671204  409225 cni.go:84] Creating CNI manager for ""
	I0805 11:57:42.671223  409225 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 11:57:42.671324  409225 start.go:340] cluster config:
	{Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:57:42.671545  409225 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:57:42.673317  409225 out.go:177] * Starting "ha-672593" primary control-plane node in "ha-672593" cluster
	I0805 11:57:42.674345  409225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:57:42.674376  409225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 11:57:42.674387  409225 cache.go:56] Caching tarball of preloaded images
	I0805 11:57:42.674468  409225 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 11:57:42.674478  409225 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 11:57:42.674589  409225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/config.json ...
	I0805 11:57:42.674765  409225 start.go:360] acquireMachinesLock for ha-672593: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 11:57:42.674801  409225 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-672593"
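
Editor's note: the machines lock acquired above is configured with Delay:500ms between attempts and a 13m timeout (the struct printed by start.go:360). The real implementation is minikube's own cross-process lock helper; the following is only an illustrative sketch of the acquire-with-retry pattern those Delay/Timeout fields imply, using a plain in-process sync.Mutex.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// acquire retries TryLock every `delay` until `timeout` expires.
func acquire(mu *sync.Mutex, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if mu.TryLock() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	var mu sync.Mutex
	start := time.Now()
	if err := acquire(&mu, 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
	mu.Unlock()
}
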
	I0805 11:57:42.674814  409225 start.go:96] Skipping create...Using existing machine configuration
	I0805 11:57:42.674822  409225 fix.go:54] fixHost starting: 
	I0805 11:57:42.675069  409225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:57:42.675109  409225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:57:42.689143  409225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40879
	I0805 11:57:42.689687  409225 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:57:42.690346  409225 main.go:141] libmachine: Using API Version  1
	I0805 11:57:42.690379  409225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:57:42.690694  409225 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:57:42.690897  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:57:42.691063  409225 main.go:141] libmachine: (ha-672593) Calling .GetState
	I0805 11:57:42.692602  409225 fix.go:112] recreateIfNeeded on ha-672593: state=Running err=<nil>
	W0805 11:57:42.692625  409225 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 11:57:42.694522  409225 out.go:177] * Updating the running kvm2 "ha-672593" VM ...
	I0805 11:57:42.695700  409225 machine.go:94] provisionDockerMachine start ...
	I0805 11:57:42.695717  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:57:42.695918  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:42.698252  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.698651  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:42.698680  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.698804  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:42.698968  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.699125  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.699226  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:42.699367  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:42.699583  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:42.699596  409225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 11:57:42.820713  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593
	
	I0805 11:57:42.820762  409225 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:57:42.821057  409225 buildroot.go:166] provisioning hostname "ha-672593"
	I0805 11:57:42.821091  409225 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:57:42.821297  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:42.823868  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.824262  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:42.824288  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.824396  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:42.824541  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.824692  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.824816  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:42.824939  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:42.825109  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:42.825125  409225 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-672593 && echo "ha-672593" | sudo tee /etc/hostname
	I0805 11:57:42.960025  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-672593
	
	I0805 11:57:42.960054  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:42.962798  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.963138  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:42.963169  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:42.963350  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:42.963536  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.963671  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:42.963844  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:42.964002  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:42.964190  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:42.964216  409225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-672593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-672593/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-672593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:57:43.076996  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:57:43.077027  409225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 11:57:43.077047  409225 buildroot.go:174] setting up certificates
	I0805 11:57:43.077055  409225 provision.go:84] configureAuth start
	I0805 11:57:43.077064  409225 main.go:141] libmachine: (ha-672593) Calling .GetMachineName
	I0805 11:57:43.077344  409225 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:57:43.079951  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.080247  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.080272  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.080382  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:43.082607  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.082951  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.082975  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.083106  409225 provision.go:143] copyHostCerts
	I0805 11:57:43.083158  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:57:43.083191  409225 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 11:57:43.083200  409225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 11:57:43.083269  409225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 11:57:43.083355  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:57:43.083375  409225 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 11:57:43.083379  409225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 11:57:43.083402  409225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 11:57:43.083459  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:57:43.083474  409225 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 11:57:43.083478  409225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 11:57:43.083498  409225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 11:57:43.083557  409225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.ha-672593 san=[127.0.0.1 192.168.39.102 ha-672593 localhost minikube]
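
Editor's note: the server certificate generated above embeds the SAN list san=[127.0.0.1 192.168.39.102 ha-672593 localhost minikube]. An easy way to confirm what actually ended up in server.pem is to parse it with Go's crypto/x509; this checker is an illustrative add-on, not part of the test suite, and the path is simply the one printed in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	raw, err := os.ReadFile("/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// These should cover the san=[...] list printed by provision.go above.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}
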
	I0805 11:57:43.185719  409225 provision.go:177] copyRemoteCerts
	I0805 11:57:43.185783  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:57:43.185811  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:43.188495  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.188838  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.188866  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.189030  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:43.189224  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:43.189392  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:43.189530  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:57:43.275306  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 11:57:43.275386  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 11:57:43.301685  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 11:57:43.301752  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0805 11:57:43.325360  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 11:57:43.325408  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 11:57:43.358227  409225 provision.go:87] duration metric: took 281.156227ms to configureAuth
	I0805 11:57:43.358257  409225 buildroot.go:189] setting minikube options for container-runtime
	I0805 11:57:43.358567  409225 config.go:182] Loaded profile config "ha-672593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:57:43.358672  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:57:43.361266  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.361697  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:57:43.361734  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:57:43.361940  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:57:43.362145  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:43.362318  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:57:43.362420  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:57:43.362583  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:57:43.362803  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:57:43.362843  409225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 11:59:14.130177  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 11:59:14.130212  409225 machine.go:97] duration metric: took 1m31.434498748s to provisionDockerMachine
	I0805 11:59:14.130232  409225 start.go:293] postStartSetup for "ha-672593" (driver="kvm2")
	I0805 11:59:14.130258  409225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:59:14.130278  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.130639  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:59:14.130679  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.134195  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.134682  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.134712  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.134864  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.135035  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.135158  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.135357  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:59:14.224728  409225 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:59:14.229288  409225 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 11:59:14.229317  409225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 11:59:14.229394  409225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 11:59:14.229472  409225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 11:59:14.229482  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 11:59:14.229582  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 11:59:14.239141  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:59:14.265593  409225 start.go:296] duration metric: took 135.34256ms for postStartSetup
	I0805 11:59:14.265640  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.265977  409225 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 11:59:14.266013  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.268773  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.269216  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.269248  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.269424  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.269626  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.269755  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.269896  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	W0805 11:59:14.358779  409225 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0805 11:59:14.358807  409225 fix.go:56] duration metric: took 1m31.683985789s for fixHost
	I0805 11:59:14.358834  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.361640  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.361963  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.361993  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.362145  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.362366  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.362580  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.362743  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.362898  409225 main.go:141] libmachine: Using SSH client type: native
	I0805 11:59:14.363079  409225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0805 11:59:14.363089  409225 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 11:59:14.476567  409225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722859154.442870085
	
	I0805 11:59:14.476588  409225 fix.go:216] guest clock: 1722859154.442870085
	I0805 11:59:14.476598  409225 fix.go:229] Guest: 2024-08-05 11:59:14.442870085 +0000 UTC Remote: 2024-08-05 11:59:14.358818403 +0000 UTC m=+91.810048921 (delta=84.051682ms)
	I0805 11:59:14.476625  409225 fix.go:200] guest clock delta is within tolerance: 84.051682ms
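
Editor's note: the clock check above runs `date +%s.%N` on the guest (rendered in this log as `date +%!s(MISSING).%!N(MISSING)` because the command string is passed through a Printf without arguments), parses the epoch timestamp, and compares it with the host-side time recorded when the SSH command returned. A stand-alone sketch of that comparison, using the two timestamps from the log and an assumed tolerance (minikube's actual threshold is not shown here):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log above.
	guestRaw := "1722859154.442870085"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	// Host-side timestamp recorded just before the SSH command returned.
	host := time.Date(2024, 8, 5, 11, 59, 14, 358818403, time.UTC)

	delta := guest.Sub(host) // ~84.051682ms for the values above
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta %v within tolerance %v: %t\n",
		delta, tolerance, math.Abs(delta.Seconds()) < tolerance.Seconds())
}
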
	I0805 11:59:14.476641  409225 start.go:83] releasing machines lock for "ha-672593", held for 1m31.801822706s
	I0805 11:59:14.476663  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.476981  409225 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:59:14.479780  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.480170  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.480251  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.480373  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.481022  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.481309  409225 main.go:141] libmachine: (ha-672593) Calling .DriverName
	I0805 11:59:14.481438  409225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:59:14.481506  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.481582  409225 ssh_runner.go:195] Run: cat /version.json
	I0805 11:59:14.481609  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHHostname
	I0805 11:59:14.484392  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.484552  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.484826  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.484853  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.484952  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:14.484975  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:14.485008  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.485205  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.485221  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHPort
	I0805 11:59:14.485403  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.485414  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHKeyPath
	I0805 11:59:14.485578  409225 main.go:141] libmachine: (ha-672593) Calling .GetSSHUsername
	I0805 11:59:14.485643  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:59:14.485695  409225 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/ha-672593/id_rsa Username:docker}
	I0805 11:59:14.587349  409225 ssh_runner.go:195] Run: systemctl --version
	I0805 11:59:14.593964  409225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 11:59:14.758447  409225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 11:59:14.765274  409225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 11:59:14.765354  409225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:59:14.775199  409225 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 11:59:14.775222  409225 start.go:495] detecting cgroup driver to use...
	I0805 11:59:14.775300  409225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 11:59:14.791786  409225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 11:59:14.805731  409225 docker.go:217] disabling cri-docker service (if available) ...
	I0805 11:59:14.805785  409225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 11:59:14.821619  409225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 11:59:14.835613  409225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 11:59:14.981307  409225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 11:59:15.130708  409225 docker.go:233] disabling docker service ...
	I0805 11:59:15.130769  409225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 11:59:15.148232  409225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 11:59:15.162795  409225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 11:59:15.306533  409225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 11:59:15.465286  409225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 11:59:15.481194  409225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:59:15.501248  409225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 11:59:15.501336  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.512467  409225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 11:59:15.512571  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.523614  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.546097  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.569776  409225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:59:15.583644  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.600072  409225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.625275  409225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 11:59:15.651666  409225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:59:15.681002  409225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:59:15.696579  409225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:59:15.873112  409225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 11:59:16.178598  409225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 11:59:16.178685  409225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 11:59:16.184758  409225 start.go:563] Will wait 60s for crictl version
	I0805 11:59:16.184863  409225 ssh_runner.go:195] Run: which crictl
	I0805 11:59:16.189800  409225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:59:16.231086  409225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
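
Editor's note: the `crictl version` output above is the CRI Version RPC answered by CRI-O on /var/run/crio/crio.sock. Assuming the google.golang.org/grpc and k8s.io/cri-api modules are available, the same RPC can be issued directly; this sketch mirrors what crictl reports and is not part of the test run.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a local unix socket; no TLS is involved.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("RuntimeName:", v.RuntimeName)
	fmt.Println("RuntimeVersion:", v.RuntimeVersion)
	fmt.Println("RuntimeApiVersion:", v.RuntimeApiVersion)
}
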
	I0805 11:59:16.231170  409225 ssh_runner.go:195] Run: crio --version
	I0805 11:59:16.260514  409225 ssh_runner.go:195] Run: crio --version
	I0805 11:59:16.292228  409225 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 11:59:16.293998  409225 main.go:141] libmachine: (ha-672593) Calling .GetIP
	I0805 11:59:16.296980  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:16.297365  409225 main.go:141] libmachine: (ha-672593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:d5:95", ip: ""} in network mk-ha-672593: {Iface:virbr1 ExpiryTime:2024-08-05 12:47:16 +0000 UTC Type:0 Mac:52:54:00:9e:d5:95 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-672593 Clientid:01:52:54:00:9e:d5:95}
	I0805 11:59:16.297392  409225 main.go:141] libmachine: (ha-672593) DBG | domain ha-672593 has defined IP address 192.168.39.102 and MAC address 52:54:00:9e:d5:95 in network mk-ha-672593
	I0805 11:59:16.297623  409225 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 11:59:16.302583  409225 kubeadm.go:883] updating cluster {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 11:59:16.302738  409225 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:59:16.302794  409225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:59:16.349503  409225 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:59:16.349528  409225 crio.go:433] Images already preloaded, skipping extraction
	I0805 11:59:16.349580  409225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 11:59:16.383917  409225 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 11:59:16.383949  409225 cache_images.go:84] Images are preloaded, skipping loading
	I0805 11:59:16.383972  409225 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0805 11:59:16.384136  409225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-672593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:59:16.384214  409225 ssh_runner.go:195] Run: crio config
	I0805 11:59:16.434959  409225 cni.go:84] Creating CNI manager for ""
	I0805 11:59:16.434988  409225 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 11:59:16.435000  409225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 11:59:16.435028  409225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-672593 NodeName:ha-672593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 11:59:16.435208  409225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-672593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 11:59:16.435232  409225 kube-vip.go:115] generating kube-vip config ...
	I0805 11:59:16.435288  409225 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 11:59:16.447203  409225 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 11:59:16.447329  409225 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0805 11:59:16.447412  409225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:59:16.457108  409225 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 11:59:16.457173  409225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 11:59:16.466373  409225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0805 11:59:16.482492  409225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:59:16.499667  409225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0805 11:59:16.516586  409225 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 11:59:16.533860  409225 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 11:59:16.538836  409225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:59:16.694264  409225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:59:16.708817  409225 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593 for IP: 192.168.39.102
	I0805 11:59:16.708844  409225 certs.go:194] generating shared ca certs ...
	I0805 11:59:16.708861  409225 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:59:16.709053  409225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 11:59:16.709105  409225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 11:59:16.709115  409225 certs.go:256] generating profile certs ...
	I0805 11:59:16.709220  409225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/client.key
	I0805 11:59:16.709257  409225 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881
	I0805 11:59:16.709276  409225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.68 192.168.39.210 192.168.39.254]
	I0805 11:59:16.939065  409225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881 ...
	I0805 11:59:16.939097  409225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881: {Name:mk4630f3d373fbbfb12205370c4cc37346a5beb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:59:16.939312  409225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881 ...
	I0805 11:59:16.939339  409225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881: {Name:mk2a397318a5ae0d98183fe8333bffc64ceab241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:59:16.939448  409225 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt.b7561881 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt
	I0805 11:59:16.939645  409225 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key.b7561881 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key
	I0805 11:59:16.939845  409225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key
	I0805 11:59:16.939866  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 11:59:16.939885  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 11:59:16.939904  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 11:59:16.939926  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 11:59:16.939944  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 11:59:16.939973  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 11:59:16.939997  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 11:59:16.940015  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 11:59:16.940085  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 11:59:16.940130  409225 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 11:59:16.940140  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:59:16.940172  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 11:59:16.940204  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:59:16.940234  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 11:59:16.940295  409225 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 11:59:16.940343  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:16.940364  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 11:59:16.940382  409225 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 11:59:16.941001  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:59:16.965830  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:59:16.989761  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:59:17.013299  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 11:59:17.036348  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 11:59:17.060299  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 11:59:17.085221  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:59:17.110250  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/ha-672593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:59:17.134049  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:59:17.157720  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 11:59:17.182218  409225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 11:59:17.205877  409225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 11:59:17.222839  409225 ssh_runner.go:195] Run: openssl version
	I0805 11:59:17.228779  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:59:17.239931  409225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:17.244638  409225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:17.244689  409225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:59:17.250195  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:59:17.259832  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 11:59:17.270601  409225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 11:59:17.274970  409225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 11:59:17.275010  409225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 11:59:17.280569  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 11:59:17.290487  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 11:59:17.301457  409225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 11:59:17.306343  409225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 11:59:17.306396  409225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 11:59:17.312215  409225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 11:59:17.321333  409225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:59:17.325874  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 11:59:17.331201  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 11:59:17.336751  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 11:59:17.342281  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 11:59:17.347907  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 11:59:17.353472  409225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 11:59:17.358988  409225 kubeadm.go:392] StartCluster: {Name:ha-672593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-672593 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.4 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:59:17.359126  409225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 11:59:17.359176  409225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 11:59:17.396955  409225 cri.go:89] found id: "dc4782d00c50bdef5b88780aeef63d08dab7808a96f6ba156107be9f56bc1800"
	I0805 11:59:17.396976  409225 cri.go:89] found id: "1ca52772afb357ffac550e72c9e900f36dcea579d77d6e84c69b53c8af4cc510"
	I0805 11:59:17.396980  409225 cri.go:89] found id: "d4fe200ecf9dca4b1fb3959ea6baebccce67dd34d879297098a2909724d8d3df"
	I0805 11:59:17.396983  409225 cri.go:89] found id: "73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1"
	I0805 11:59:17.396985  409225 cri.go:89] found id: "e556c9ba49f5fe264685a2408b26a61c8c5c8836f0a38b89b776f338b8b0cd22"
	I0805 11:59:17.396988  409225 cri.go:89] found id: "6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94"
	I0805 11:59:17.396991  409225 cri.go:89] found id: "57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46"
	I0805 11:59:17.396993  409225 cri.go:89] found id: "11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48"
	I0805 11:59:17.396996  409225 cri.go:89] found id: "019abd676baf2985a3bf77641c1032cae7b3c22eb67fff535a25d9860b394bfd"
	I0805 11:59:17.397003  409225 cri.go:89] found id: "1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd"
	I0805 11:59:17.397009  409225 cri.go:89] found id: "50907082bdeb824e9a80122033ed1df5631143e152751f066a7bdfba1156e565"
	I0805 11:59:17.397011  409225 cri.go:89] found id: "ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334"
	I0805 11:59:17.397016  409225 cri.go:89] found id: "b17d8131f0edcc3018bb9d820f56a29a7806d7d57a91b849fc1350d6a8465775"
	I0805 11:59:17.397019  409225 cri.go:89] found id: ""
	I0805 11:59:17.397075  409225 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.567228783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859470567195708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54a82d77-06bf-49bf-92e9-bc91ccf347d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.568273191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a2ca955-3456-4948-b765-db47c5d7a47d name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.568355289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a2ca955-3456-4948-b765-db47c5d7a47d name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.568877986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a2ca955-3456-4948-b765-db47c5d7a47d name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.618640739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0eaf5bb8-18b2-40bf-9f1d-7eff788ac221 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.618760726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0eaf5bb8-18b2-40bf-9f1d-7eff788ac221 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.620330468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d7a5023-bd0d-45bd-a2c8-17f31809182f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.620811715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859470620780536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d7a5023-bd0d-45bd-a2c8-17f31809182f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.621473980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb02048e-60ac-4556-b625-06e6cb34a9ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.621575887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb02048e-60ac-4556-b625-06e6cb34a9ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.622359921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb02048e-60ac-4556-b625-06e6cb34a9ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.673824696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=253a5920-91d4-46fe-971e-0f98a7b98dc9 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.673903277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=253a5920-91d4-46fe-971e-0f98a7b98dc9 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.675783787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6160901c-456e-4754-b152-7b6c4465720d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.676319002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859470676294747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6160901c-456e-4754-b152-7b6c4465720d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.676913884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f649cca-7c1b-4fd0-939d-a74d897fe069 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.677014742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f649cca-7c1b-4fd0-939d-a74d897fe069 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.677414202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f649cca-7c1b-4fd0-939d-a74d897fe069 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.721379725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d6aa07e-58c6-47f0-8c20-459cbb5c4f96 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.721455936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d6aa07e-58c6-47f0-8c20-459cbb5c4f96 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.722539922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c29ed8d-8c5f-41ad-9404-8089b54907e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.723127628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722859470723102745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c29ed8d-8c5f-41ad-9404-8089b54907e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.723552992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ee55848-7be3-4ec1-8873-870f70c79d16 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.723635962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ee55848-7be3-4ec1-8873-870f70c79d16 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:04:30 ha-672593 crio[3759]: time="2024-08-05 12:04:30.724111458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:def4e84760678cd0dbb4d7e068c88e72abae9153d09fa973dbf47fa862a37689,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722859245020269631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722859204025017735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722859202024655857,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9974eaa7c2760a20ad8a8c9dc89a8413990b6fe0097548a42ed3a7d75ca3e0,PodSandboxId:dd5b9cacb5cc537cfc77786f8abc1ac6b5cdd30bdbbdec5896b201390799176d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722859194283466371,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotations:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9b74412829253a4ea936bd3b48c7091e031227a4153b3d9a160a98a0a0dba97,PodSandboxId:63b2e119430f5ebdaed8ab7d4c84474c2731a2502dcc9d8a5a2115671edeaabf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722859193019645818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3a4e49-f517-40e4-bd83-1e69b6a7550c,},Annotations:map[string]string{io.kubernetes.container.hash: 907c955b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0842e66122440248aa898aded1a79fc0724d7cacadd74bb85c99e26c4eecc856,PodSandboxId:6407ea95878ee64061230ee9994f6229411978fd82ce4c54061dea268c21eca7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722859176000628569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cb2283b62a6c10b45ae9ad5cf72bc4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7,PodSandboxId:84d195aa46f38b46a8f6ac426c3b0c075426cc727d65eb609be0b552c71abf25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722859161511700734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829,PodSandboxId:f2f20b5872376a4eeec955d07006691b0a58a1c6934af8f278f606fdc7a3c9e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722859161235708441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c480179de
34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824,PodSandboxId:c80856b97ef6df258f187ac4e8a84db6d4494999840bd530df4b4fd127004b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161296898618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernetes.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca,PodSandboxId:43e06c30e0e1aedcf2ac03742e2deb25fc5e402e97df22ab09adfe540de58015,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722859161095558137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248,PodSandboxId:0ab38c276a2d5829d4a01113de1ddc36f6c2b5b953642b8d868cb4dc77609591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722859161031340558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a,PodSandboxId:05390a861ed7f354a26bf4d2372549f29e05df52acbed5d24a76c3d944268504,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722859160977296403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43
249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c,PodSandboxId:be0aa43da318c85ae7e6f88d2cc94f9993168d381f086f4e06e40431a8b91078,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722859160840698876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a381773e823990c7e015983b07a0d8,},Annotation
s:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858,PodSandboxId:4773bf48efb8a64d2aefce07c25c72dc9d826019fbd8b8219d20872f63fe0412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722859160774392590,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48534ca818552de6101946d7c7932fd,},Annotat
ions:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f332a2eefb38a7643f5eabdc4c3795fdf9fc7faa3025977758afda4965c4d06f,PodSandboxId:96a63340a808e8f1d3c8938db5651c8ba9a84b0066e04495da70a33af565d687,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722858640390384315,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xx72g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b4aad5e1-e3ed-450f-b0c6-fa690e21632b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f49c7961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1,PodSandboxId:162aab1f9af67e7a7875d7f44424f7edaa5b1aa74a891b3a0e84709da26c69fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498489320838,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sgd4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ff9d45-f09f-4213-b1c3-d568ee5ab68a,},Annotations:map[string]string{io.kubernet
es.container.hash: d7a5fe30,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94,PodSandboxId:60a5e5f93bb15c3691c3fccd5be1c38de24355d307d1217ada049b281288a7b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722858498409265151,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c09423-e24f-4d26-b7f9-3da3986d538b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a333149,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46,PodSandboxId:214360f7ff706f37f1cd346a7910caa4b07da7a0f1b94fd4af2eb9609e49369b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722858486486501073,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bdb2b4a-e7c6-4e03-80f8-cf80501095c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96fd5c22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48,PodSandboxId:b824fdfadbf52a8243b61b3c55556272c3d50bd4fafe70328531a35defcf2fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722858481390532133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wtsdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a1664bb-e0a8-496e-a74d-3c25080dca8e,},Annotations:map[string]string{io.kubernetes.container.hash: ff2ee446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd,PodSandboxId:1c9e20b33b7b7424aca33506f1a815c58190e9875a108206c654e048992f391f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722858461888541864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda5be0e77a9b07805ce43249e5859e,},Annotations:map[string]string{io.kubernetes.container.hash: f024b421,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334,PodSandboxId:c7429b1a8552f574f21cc855aa6bf767680c56d05bb1df8b83c28a59cd561fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,S
tate:CONTAINER_EXITED,CreatedAt:1722858461852184959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-672593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b70bfddf8dc93c8b8709942f15d00b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ee55848-7be3-4ec1-8873-870f70c79d16 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	def4e84760678       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   63b2e119430f5       storage-provisioner
	e579894cd5595       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   4773bf48efb8a       kube-controller-manager-ha-672593
	b85ae9f8969ae       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   be0aa43da318c       kube-apiserver-ha-672593
	5f9974eaa7c27       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   dd5b9cacb5cc5       busybox-fc5497c4f-xx72g
	f9b7441282925       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   63b2e119430f5       storage-provisioner
	0842e66122440       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   6407ea95878ee       kube-vip-ha-672593
	1a210faa6df55       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   84d195aa46f38       kube-proxy-wtsdt
	1c480179de34e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c80856b97ef6d       coredns-7db6d8ff4d-sgd4v
	880c87361835d       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   f2f20b5872376       kindnet-7fndz
	249df3a7b9531       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   43e06c30e0e1a       coredns-7db6d8ff4d-sfh7c
	2c47baf951c9f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   0ab38c276a2d5       kube-scheduler-ha-672593
	bfd2da4a0337d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   05390a861ed7f       etcd-ha-672593
	053ea64b0f339       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   be0aa43da318c       kube-apiserver-ha-672593
	5ffeb8fa20fc0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   4773bf48efb8a       kube-controller-manager-ha-672593
	f332a2eefb38a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   96a63340a808e       busybox-fc5497c4f-xx72g
	73fd9ef194837       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   162aab1f9af67       coredns-7db6d8ff4d-sgd4v
	6354e702fe80a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   60a5e5f93bb15       coredns-7db6d8ff4d-sfh7c
	57cec2b511aa8       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   214360f7ff706       kindnet-7fndz
	11c4e00c9ba78       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   b824fdfadbf52       kube-proxy-wtsdt
	1019d9e100746       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   1c9e20b33b7b7       etcd-ha-672593
	ca9839b56e3e6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   c7429b1a8552f       kube-scheduler-ha-672593
	
	
	==> coredns [1c480179de34e5729f3b0321c619752bbead36d7a7d95b4a68d00c63e4dc8824] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:51772->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[853896924]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-Aug-2024 11:59:32.827) (total time: 10479ms):
	Trace[853896924]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:51772->10.96.0.1:443: read: connection reset by peer 10477ms (11:59:43.304)
	Trace[853896924]: [10.479008471s] [10.479008471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:51772->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:51782->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:51782->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [249df3a7b9531a6e24de2b21e3cd4e78b7a85a9cec75e1af8b622e6721ed40ca] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52562->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1899813604]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-Aug-2024 11:59:32.691) (total time: 10614ms):
	Trace[1899813604]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52562->10.96.0.1:443: read: connection reset by peer 10613ms (11:59:43.305)
	Trace[1899813604]: [10.614071628s] [10.614071628s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52562->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51080->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51080->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6354e702fe80a5a9853cdd48f89dde467f1f7359bb495c8a4f6a49048f151d94] <==
	[INFO] 10.244.1.2:35448 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147398s
	[INFO] 10.244.1.2:52034 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147397s
	[INFO] 10.244.0.4:50553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110672s
	[INFO] 10.244.0.4:47698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069619s
	[INFO] 10.244.0.4:39504 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139191s
	[INFO] 10.244.0.4:35787 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065087s
	[INFO] 10.244.2.2:57478 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118877s
	[INFO] 10.244.2.2:44657 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121159s
	[INFO] 10.244.2.2:33599 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126768s
	[INFO] 10.244.1.2:54159 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179418s
	[INFO] 10.244.1.2:49562 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092072s
	[INFO] 10.244.0.4:42290 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077914s
	[INFO] 10.244.2.2:59634 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164343s
	[INFO] 10.244.2.2:43784 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159677s
	[INFO] 10.244.1.2:49443 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173465s
	[INFO] 10.244.1.2:58280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015744s
	[INFO] 10.244.0.4:52050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111584s
	[INFO] 10.244.0.4:42223 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078636s
	[INFO] 10.244.0.4:42616 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084454s
	[INFO] 10.244.0.4:49723 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087038s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [73fd9ef1948379bdfd834218bee29f227bc55765a421d994bcc5bbfe373658c1] <==
	[INFO] 10.244.2.2:34794 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013565s
	[INFO] 10.244.1.2:33425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168612s
	[INFO] 10.244.1.2:49339 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001876895s
	[INFO] 10.244.1.2:41345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001388007s
	[INFO] 10.244.1.2:39680 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097906s
	[INFO] 10.244.1.2:38660 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162674s
	[INFO] 10.244.0.4:37518 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828264s
	[INFO] 10.244.0.4:43389 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136081s
	[INFO] 10.244.0.4:58226 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071105s
	[INFO] 10.244.0.4:43658 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001098104s
	[INFO] 10.244.2.2:40561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109999s
	[INFO] 10.244.1.2:41071 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120854s
	[INFO] 10.244.1.2:40710 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080783s
	[INFO] 10.244.0.4:54672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011185s
	[INFO] 10.244.0.4:55288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117161s
	[INFO] 10.244.0.4:41744 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068123s
	[INFO] 10.244.2.2:60620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013916s
	[INFO] 10.244.2.2:52672 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153187s
	[INFO] 10.244.1.2:36870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144481s
	[INFO] 10.244.1.2:43017 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166959s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1936&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1877&timeout=6m1s&timeoutSeconds=361&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1893&timeout=8m8s&timeoutSeconds=488&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-672593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T11_47_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:47:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:04:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:03:09 +0000   Mon, 05 Aug 2024 12:03:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:03:09 +0000   Mon, 05 Aug 2024 12:03:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:03:09 +0000   Mon, 05 Aug 2024 12:03:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:03:09 +0000   Mon, 05 Aug 2024 12:03:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-672593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb8829a6b1d145d6aee2ea0e80194fe4
	  System UUID:                fb8829a6-b1d1-45d6-aee2-ea0e80194fe4
	  Boot ID:                    ecb22512-bcb2-43ab-b502-fc0c346e754f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xx72g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-sfh7c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-sgd4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-672593                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-7fndz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-672593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-672593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-wtsdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-672593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-672593                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m24s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Warning  ContainerGCFailed        5m43s (x2 over 6m43s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-672593 event: Registered Node ha-672593 in Controller
	  Normal   NodeNotReady             105s                   node-controller  Node ha-672593 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  82s (x2 over 16m)      kubelet          Node ha-672593 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s (x2 over 16m)      kubelet          Node ha-672593 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s (x2 over 16m)      kubelet          Node ha-672593 status is now: NodeHasSufficientPID
	  Normal   NodeReady                82s (x2 over 16m)      kubelet          Node ha-672593 status is now: NodeReady
	
	
	Name:               ha-672593-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:48:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:04:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:00:49 +0000   Mon, 05 Aug 2024 12:00:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-672593-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8aa3c6ca9e9a439e91c6c120c9ce9ce7
	  System UUID:                8aa3c6ca-9e9a-439e-91c6-c120c9ce9ce7
	  Boot ID:                    9080c272-ba5e-4e9d-8215-dd4f5b1ffe33
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vn64j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-672593-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-85fm7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-672593-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-672593-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-mdwh2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-672593-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-672593-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-672593-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-672593-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-672593-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-672593-m02 status is now: NodeNotReady
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m50s (x8 over 4m51s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m50s (x8 over 4m51s)  kubelet          Node ha-672593-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m50s (x7 over 4m51s)  kubelet          Node ha-672593-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-672593-m02 event: Registered Node ha-672593-m02 in Controller
	
	
	Name:               ha-672593-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-672593-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=ha-672593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T11_51_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:51:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-672593-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:02:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:02:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:02:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:02:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 12:01:43 +0000   Mon, 05 Aug 2024 12:02:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-672593-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5561d3ea391496e983c8078f06ff6c0
	  System UUID:                f5561d3e-a391-496e-983c-8078f06ff6c0
	  Boot ID:                    10eb468d-1081-40f6-8d09-5554203cd004
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-md68x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-6dfc5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-lpp7n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-672593-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-672593-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-672593-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-672593-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-672593-m04 event: Registered Node ha-672593-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m49s)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m49s)  kubelet          Node ha-672593-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m49s)  kubelet          Node ha-672593-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-672593-m04 has been rebooted, boot id: 10eb468d-1081-40f6-8d09-5554203cd004
	  Normal   NodeReady                2m48s                  kubelet          Node ha-672593-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m37s)   node-controller  Node ha-672593-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.056383] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.055926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054067] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.198790] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.118095] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.300760] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.230253] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.263666] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.055709] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.688004] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.472104] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[Aug 5 11:48] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.212364] kauditd_printk_skb: 29 callbacks suppressed
	[ +52.825644] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 5 11:59] systemd-fstab-generator[3610]: Ignoring "noauto" option for root device
	[  +0.142491] systemd-fstab-generator[3622]: Ignoring "noauto" option for root device
	[  +0.185678] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.153541] systemd-fstab-generator[3648]: Ignoring "noauto" option for root device
	[  +0.401182] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +0.824558] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +3.894626] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.323701] kauditd_printk_skb: 85 callbacks suppressed
	[Aug 5 12:00] kauditd_printk_skb: 1 callbacks suppressed
	[ +11.023263] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1019d9e10074631835690fa0d372f2c043a64f237e1ddf9e22bcbd18d59fa6cd] <==
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T11:57:43.535859Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T11:57:43.234938Z","time spent":"300.907105ms","remote":"127.0.0.1:33196","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 11:57:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T11:57:43.58185Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T11:57:43.582035Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T11:57:43.583483Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b93c4bc4617b0fe","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-05T11:57:43.583676Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.58371Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.583734Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.583849Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.583986Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.584056Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.584098Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64db3d4ba151eb25"}
	{"level":"info","ts":"2024-08-05T11:57:43.584106Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584116Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584167Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584268Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.58435Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584403Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.584447Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T11:57:43.587657Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-08-05T11:57:43.587795Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-08-05T11:57:43.587834Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-672593","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.102:2380"],"advertise-client-urls":["https://192.168.39.102:2379"]}
	
	
	==> etcd [bfd2da4a0337d86874231815a57a712b3d7ccf013158a47e27592b9428e6543a] <==
	{"level":"warn","ts":"2024-08-05T12:01:00.833851Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T12:01:00.83404Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9c6cca754efc9caa","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-05T12:01:01.389194Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.392695Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.393452Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.4093Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"9c6cca754efc9caa","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-05T12:01:01.409492Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:01.425502Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"9c6cca754efc9caa","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-05T12:01:01.425606Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:56.785866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe switched to configuration voters=(7267469818730769189 7751755696543609086)"}
	{"level":"info","ts":"2024-08-05T12:01:56.78781Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1cdd3ec65c5f94ba","local-member-id":"6b93c4bc4617b0fe","removed-remote-peer-id":"9c6cca754efc9caa","removed-remote-peer-urls":["https://192.168.39.210:2380"]}
	{"level":"info","ts":"2024-08-05T12:01:56.787872Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"warn","ts":"2024-08-05T12:01:56.788175Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:56.788224Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"warn","ts":"2024-08-05T12:01:56.788527Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:56.78856Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:56.788733Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"warn","ts":"2024-08-05T12:01:56.788891Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa","error":"context canceled"}
	{"level":"warn","ts":"2024-08-05T12:01:56.789047Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9c6cca754efc9caa","error":"failed to read 9c6cca754efc9caa on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-05T12:01:56.789172Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"warn","ts":"2024-08-05T12:01:56.789359Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa","error":"context canceled"}
	{"level":"info","ts":"2024-08-05T12:01:56.789434Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:56.789449Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9c6cca754efc9caa"}
	{"level":"info","ts":"2024-08-05T12:01:56.789461Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6b93c4bc4617b0fe","removed-remote-peer-id":"9c6cca754efc9caa"}
	{"level":"warn","ts":"2024-08-05T12:01:56.802175Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6b93c4bc4617b0fe","remote-peer-id-stream-handler":"6b93c4bc4617b0fe","remote-peer-id-from":"9c6cca754efc9caa"}
	
	
	==> kernel <==
	 12:04:31 up 17 min,  0 users,  load average: 0.38, 0.59, 0.43
	Linux ha-672593 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57cec2b511aa8ca1b171b7dfff39ecb51cb11d9cd4efd552598fcc0054488c46] <==
	I0805 11:57:17.433866       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:57:17.433926       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:57:17.434171       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:57:17.434202       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:57:17.434283       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:57:17.434309       1 main.go:299] handling current node
	I0805 11:57:17.434324       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:57:17.434330       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:57:27.435455       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:57:27.435605       1 main.go:299] handling current node
	I0805 11:57:27.435658       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:57:27.435666       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:57:27.436012       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:57:27.436036       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	I0805 11:57:27.436103       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:57:27.436134       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:57:37.436099       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 11:57:37.436141       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 11:57:37.436401       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 11:57:37.436441       1 main.go:299] handling current node
	I0805 11:57:37.436476       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 11:57:37.436493       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 11:57:37.436621       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0805 11:57:37.436644       1 main.go:322] Node ha-672593-m03 has CIDR [10.244.2.0/24] 
	E0805 11:57:42.354277       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [880c87361835da744d26ac324a55105244b712a8e5007091996ee17dbd6ad829] <==
	I0805 12:03:42.450937       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 12:03:52.458837       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 12:03:52.459035       1 main.go:299] handling current node
	I0805 12:03:52.459078       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 12:03:52.459107       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 12:03:52.459305       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 12:03:52.459345       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 12:04:02.455339       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 12:04:02.455438       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 12:04:02.455606       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 12:04:02.455630       1 main.go:299] handling current node
	I0805 12:04:02.455663       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 12:04:02.455695       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 12:04:12.457463       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 12:04:12.457653       1 main.go:299] handling current node
	I0805 12:04:12.457686       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 12:04:12.457705       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 12:04:12.457864       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 12:04:12.457898       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	I0805 12:04:22.449903       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0805 12:04:22.450175       1 main.go:299] handling current node
	I0805 12:04:22.450221       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0805 12:04:22.450249       1 main.go:322] Node ha-672593-m02 has CIDR [10.244.1.0/24] 
	I0805 12:04:22.450437       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0805 12:04:22.450472       1 main.go:322] Node ha-672593-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [053ea64b0f339759d8b134779bddc3a6f5df793cf2ae89b324e84a4f4fe8f12c] <==
	I0805 11:59:21.825180       1 options.go:221] external host was not specified, using 192.168.39.102
	I0805 11:59:21.828521       1 server.go:148] Version: v1.30.3
	I0805 11:59:21.828593       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:59:22.285768       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0805 11:59:22.294913       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 11:59:22.298168       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0805 11:59:22.298198       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0805 11:59:22.298358       1 instance.go:299] Using reconciler: lease
	W0805 11:59:42.285935       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0805 11:59:42.286176       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0805 11:59:42.298872       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b85ae9f8969ae4a6662656f5f6e5aa97c0dd6b966396a243445b4b71fb627f7b] <==
	I0805 12:00:04.368653       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0805 12:00:04.368666       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0805 12:00:04.312689       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0805 12:00:04.417407       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 12:00:04.418008       1 aggregator.go:165] initial CRD sync complete...
	I0805 12:00:04.418130       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 12:00:04.418179       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 12:00:04.512475       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 12:00:04.518407       1 cache.go:39] Caches are synced for autoregister controller
	I0805 12:00:04.518545       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 12:00:04.520141       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 12:00:04.518578       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 12:00:04.518590       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 12:00:04.519515       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 12:00:04.519536       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 12:00:04.529673       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 12:00:04.530310       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 12:00:04.530384       1 policy_source.go:224] refreshing policies
	W0805 12:00:04.531148       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.210 192.168.39.68]
	I0805 12:00:04.532635       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 12:00:04.541385       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 12:00:04.547522       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0805 12:00:04.554561       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0805 12:00:05.318715       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 12:00:05.771837       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.210 192.168.39.68]
	
	
	==> kube-controller-manager [5ffeb8fa20fc0dccefd8cd88750035d83a8fd93120b43abda0098f4ca62da858] <==
	I0805 11:59:22.036038       1 serving.go:380] Generated self-signed cert in-memory
	I0805 11:59:22.485631       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0805 11:59:22.485676       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:59:22.487282       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0805 11:59:22.487855       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 11:59:22.488067       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 11:59:22.488152       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0805 11:59:43.304376       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.102:8443/healthz\": dial tcp 192.168.39.102:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e579894cd5595ae28bba0f23c22901f5d6e2d2234c275c125c4866264f111567] <==
	E0805 12:02:36.525935       1 gc_controller.go:153] "Failed to get node" err="node \"ha-672593-m03\" not found" logger="pod-garbage-collector-controller" node="ha-672593-m03"
	E0805 12:02:36.526047       1 gc_controller.go:153] "Failed to get node" err="node \"ha-672593-m03\" not found" logger="pod-garbage-collector-controller" node="ha-672593-m03"
	E0805 12:02:36.526060       1 gc_controller.go:153] "Failed to get node" err="node \"ha-672593-m03\" not found" logger="pod-garbage-collector-controller" node="ha-672593-m03"
	E0805 12:02:36.526066       1 gc_controller.go:153] "Failed to get node" err="node \"ha-672593-m03\" not found" logger="pod-garbage-collector-controller" node="ha-672593-m03"
	E0805 12:02:36.526071       1 gc_controller.go:153] "Failed to get node" err="node \"ha-672593-m03\" not found" logger="pod-garbage-collector-controller" node="ha-672593-m03"
	I0805 12:02:44.576196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.545519ms"
	I0805 12:02:44.576311       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.808µs"
	I0805 12:02:46.594073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.009806ms"
	I0805 12:02:46.594287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.584µs"
	I0805 12:02:46.659686       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.416713ms"
	I0805 12:02:46.659787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.325µs"
	I0805 12:02:46.770822       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rctzh\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 12:02:46.771509       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"15a14bcf-7628-4b88-a547-4a20d328b035", APIVersion:"v1", ResourceVersion:"258", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rctzh": the object has been modified; please apply your changes to the latest version and try again
	I0805 12:02:46.794557       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.973953ms"
	I0805 12:02:46.794662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.459µs"
	I0805 12:03:18.200836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.41749ms"
	I0805 12:03:18.201301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.868µs"
	I0805 12:03:18.309924       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rctzh\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 12:03:18.310271       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"15a14bcf-7628-4b88-a547-4a20d328b035", APIVersion:"v1", ResourceVersion:"258", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rctzh": the object has been modified; please apply your changes to the latest version and try again
	I0805 12:03:18.375035       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rctzh\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 12:03:18.375227       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"15a14bcf-7628-4b88-a547-4a20d328b035", APIVersion:"v1", ResourceVersion:"258", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rctzh EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rctzh": the object has been modified; please apply your changes to the latest version and try again
	I0805 12:03:18.388794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.29337ms"
	E0805 12:03:18.389631       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0805 12:03:18.390029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="235.175µs"
	I0805 12:03:18.395537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="156.067µs"
	
	
	==> kube-proxy [11c4e00c9ba78ff0cfb337d7435931f39fe7ccd42145fa6670487d190cacee48] <==
	E0805 11:56:28.311792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:28.311853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:28.311872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:34.647715       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:34.648621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:34.648624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:34.648680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:34.648798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:34.648816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:43.863816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:43.864052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:43.864211       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:43.864276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:56:46.936482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:56:46.936615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:02.297110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:02.297156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:05.369264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:05.369455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:11.512173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:11.512673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1923": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:39.160405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:39.160758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-672593&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 11:57:39.160918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 11:57:39.161015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1846": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [1a210faa6df5512e0f8133b7eff1626430c8da65c8a544702154ec3c88d40fc7] <==
	E0805 11:59:23.608298       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:26.680598       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:29.752543       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:35.895859       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 11:59:48.183682       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 12:00:06.616722       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0805 12:00:06.620088       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0805 12:00:06.727178       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:00:06.727351       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:00:06.727387       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:00:06.735802       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:00:06.737603       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:00:06.737672       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:00:06.745171       1 config.go:192] "Starting service config controller"
	I0805 12:00:06.745243       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:00:06.745300       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:00:06.745321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:00:06.746206       1 config.go:319] "Starting node config controller"
	I0805 12:00:06.746263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:00:06.845876       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:00:06.846011       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:00:06.846364       1 shared_informer.go:320] Caches are synced for node config
	W0805 12:02:52.109468       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0805 12:02:52.109611       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0805 12:02:52.109655       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [2c47baf951c9feb2c1c30ec178c6363465fbd3163873df192a22e627c24c7248] <==
	W0805 11:59:58.984121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:58.984158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:58.985735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:58.985779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:59.518357       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:59.518420       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 11:59:59.842662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 11:59:59.842734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 12:00:01.700540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 12:00:01.700638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 12:00:01.708795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.102:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0805 12:00:01.708875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.102:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0805 12:00:04.415532       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 12:00:04.415716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 12:00:04.420391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 12:00:04.421082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 12:00:04.421280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 12:00:04.421590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 12:00:04.421735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 12:00:04.422066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 12:00:25.112927       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 12:01:55.323536       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-md68x\": pod busybox-fc5497c4f-md68x is already assigned to node \"ha-672593-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-md68x" node="ha-672593-m04"
	E0805 12:01:55.324886       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6ecfa5ff-bd91-4ccc-8c90-11b2ef49e4e0(default/busybox-fc5497c4f-md68x) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-md68x"
	E0805 12:01:55.325075       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-md68x\": pod busybox-fc5497c4f-md68x is already assigned to node \"ha-672593-m04\"" pod="default/busybox-fc5497c4f-md68x"
	I0805 12:01:55.325123       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-md68x" node="ha-672593-m04"
	
	
	==> kube-scheduler [ca9839b56e3e62d7ac6b88dc20149da25f586b4033e03a09844938e5b85b6334] <==
	W0805 11:57:40.344688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 11:57:40.344734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 11:57:41.020325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:41.020355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 11:57:41.042771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 11:57:41.042836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 11:57:41.248719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 11:57:41.248771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 11:57:41.304894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 11:57:41.305028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 11:57:41.608869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 11:57:41.609030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 11:57:41.773774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 11:57:41.773880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 11:57:41.830572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:41.830618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 11:57:41.866668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 11:57:41.866720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 11:57:42.062515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 11:57:42.062552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 11:57:42.275826       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 11:57:42.276037       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 11:57:43.021019       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:43.021062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 11:57:43.482559       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 05 12:02:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:02:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:02:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 12:02:49 ha-672593 kubelet[1363]: E0805 12:02:49.106044    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-672593\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 05 12:02:56 ha-672593 kubelet[1363]: E0805 12:02:56.051417    1363 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-672593?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 05 12:02:59 ha-672593 kubelet[1363]: E0805 12:02:59.107150    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-672593\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-672593?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 05 12:02:59 ha-672593 kubelet[1363]: E0805 12:02:59.107203    1363 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 05 12:03:06 ha-672593 kubelet[1363]: E0805 12:03:06.052368    1363 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-672593?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 05 12:03:06 ha-672593 kubelet[1363]: I0805 12:03:06.052455    1363 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485180    1363 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485273    1363 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485316    1363 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485337    1363 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485362    1363 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485383    1363 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485402    1363 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485421    1363 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:08 ha-672593 kubelet[1363]: E0805 12:03:08.485488    1363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-672593?timeout=10s\": http2: client connection lost" interval="200ms"
	Aug 05 12:03:08 ha-672593 kubelet[1363]: W0805 12:03:08.485678    1363 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 05 12:03:09 ha-672593 kubelet[1363]: I0805 12:03:09.411015    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-672593" podStartSLOduration=144.410926291 podStartE2EDuration="2m24.410926291s" podCreationTimestamp="2024-08-05 12:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 12:00:46.01939515 +0000 UTC m=+778.129276577" watchObservedRunningTime="2024-08-05 12:03:09.410926291 +0000 UTC m=+921.520807718"
	Aug 05 12:03:48 ha-672593 kubelet[1363]: E0805 12:03:48.029778    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:03:48 ha-672593 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:03:48 ha-672593 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:03:48 ha-672593 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:03:48 ha-672593 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:04:30.267862  411648 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
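The "bufio.Scanner: token too long" failure recorded above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt. Below is a minimal sketch of a line reader that raises that limit; the readLongLines helper, the relative file path, and the 1 MiB cap are illustrative assumptions for this sketch, not minikube's actual implementation.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines scans a file line by line, allowing lines up to maxLine
	// bytes instead of bufio.Scanner's default 64 KiB token limit.
	func readLongLines(path string, maxLine int) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Grow the scanner's buffer so very long log lines no longer
		// trigger "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 64*1024), maxLine)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		return sc.Err() // still non-nil if a line exceeds maxLine
	}

	func main() {
		// Path and cap are illustrative only.
		if err := readLongLines("lastStart.txt", 1024*1024); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}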
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-672593 -n ha-672593
helpers_test.go:261: (dbg) Run:  kubectl --context ha-672593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.85s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (323.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-841883
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-841883
E0805 12:20:27.757492  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 12:20:55.975539  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-841883: exit status 82 (2m1.879974244s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-841883-m03"  ...
	* Stopping node "multinode-841883-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-841883" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841883 --wait=true -v=8 --alsologtostderr
E0805 12:22:52.926917  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841883 --wait=true -v=8 --alsologtostderr: (3m19.011163017s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-841883
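To retrace this stop-then-restart sequence outside the test harness, a rough Go sketch using os/exec is given below. It shells out to the same minikube binary and flags captured above; the 3-minute stop deadline is an illustrative value chosen to surface the GUEST_STOP_TIMEOUT symptom seen here, not the harness's actual timeout.

	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// run executes a minikube command with the given context, streaming its
	// output, mirroring the stop -> start --wait=true sequence in the log above.
	func run(ctx context.Context, args ...string) error {
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "multinode-841883" // profile name from this test run

		// Stop with a deadline; in the failing run this step timed out
		// (exit status 82, GUEST_STOP_TIMEOUT) after roughly two minutes.
		stopCtx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()
		if err := run(stopCtx, "stop", "-p", profile); err != nil {
			fmt.Fprintln(os.Stderr, "stop failed:", err)
		}

		// Restart and wait for all nodes, as the test does next.
		if err := run(context.Background(), "start", "-p", profile,
			"--wait=true", "-v=8", "--alsologtostderr"); err != nil {
			fmt.Fprintln(os.Stderr, "start failed:", err)
		}
	}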
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-841883 -n multinode-841883
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-841883 logs -n 25: (1.471621327s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2344340306/001/cp-test_multinode-841883-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883:/home/docker/cp-test_multinode-841883-m02_multinode-841883.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883 sudo cat                                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m02_multinode-841883.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03:/home/docker/cp-test_multinode-841883-m02_multinode-841883-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883-m03 sudo cat                                   | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m02_multinode-841883-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp testdata/cp-test.txt                                                | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2344340306/001/cp-test_multinode-841883-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883:/home/docker/cp-test_multinode-841883-m03_multinode-841883.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883 sudo cat                                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m03_multinode-841883.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02:/home/docker/cp-test_multinode-841883-m03_multinode-841883-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883-m02 sudo cat                                   | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m03_multinode-841883-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-841883 node stop m03                                                          | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	| node    | multinode-841883 node start                                                             | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-841883                                                                | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC |                     |
	| stop    | -p multinode-841883                                                                     | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC |                     |
	| start   | -p multinode-841883                                                                     | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:21 UTC | 05 Aug 24 12:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-841883                                                                | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:21:54
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:21:54.825660  421523 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:21:54.825898  421523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:21:54.825927  421523 out.go:304] Setting ErrFile to fd 2...
	I0805 12:21:54.825944  421523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:21:54.826537  421523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:21:54.827270  421523 out.go:298] Setting JSON to false
	I0805 12:21:54.828356  421523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7462,"bootTime":1722853053,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:21:54.828422  421523 start.go:139] virtualization: kvm guest
	I0805 12:21:54.830928  421523 out.go:177] * [multinode-841883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:21:54.832466  421523 notify.go:220] Checking for updates...
	I0805 12:21:54.832493  421523 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:21:54.834048  421523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:21:54.835661  421523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:21:54.837247  421523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:21:54.838901  421523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:21:54.840066  421523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:21:54.841693  421523 config.go:182] Loaded profile config "multinode-841883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:21:54.841782  421523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:21:54.842202  421523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:21:54.842265  421523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:21:54.858025  421523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0805 12:21:54.858470  421523 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:21:54.858965  421523 main.go:141] libmachine: Using API Version  1
	I0805 12:21:54.858985  421523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:21:54.859457  421523 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:21:54.859679  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:21:54.894646  421523 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:21:54.895874  421523 start.go:297] selected driver: kvm2
	I0805 12:21:54.895892  421523 start.go:901] validating driver "kvm2" against &{Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:21:54.896034  421523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:21:54.896365  421523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:21:54.896441  421523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:21:54.911831  421523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:21:54.912595  421523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:21:54.912680  421523 cni.go:84] Creating CNI manager for ""
	I0805 12:21:54.912697  421523 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 12:21:54.912779  421523 start.go:340] cluster config:
	{Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:21:54.912949  421523 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:21:54.914924  421523 out.go:177] * Starting "multinode-841883" primary control-plane node in "multinode-841883" cluster
	I0805 12:21:54.916494  421523 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:21:54.916542  421523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 12:21:54.916553  421523 cache.go:56] Caching tarball of preloaded images
	I0805 12:21:54.916651  421523 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:21:54.916669  421523 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 12:21:54.916793  421523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/config.json ...
	I0805 12:21:54.917032  421523 start.go:360] acquireMachinesLock for multinode-841883: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:21:54.917105  421523 start.go:364] duration metric: took 42.097µs to acquireMachinesLock for "multinode-841883"
	I0805 12:21:54.917122  421523 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:21:54.917128  421523 fix.go:54] fixHost starting: 
	I0805 12:21:54.917394  421523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:21:54.917429  421523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:21:54.931256  421523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
	I0805 12:21:54.931699  421523 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:21:54.932193  421523 main.go:141] libmachine: Using API Version  1
	I0805 12:21:54.932215  421523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:21:54.932499  421523 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:21:54.932663  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:21:54.932924  421523 main.go:141] libmachine: (multinode-841883) Calling .GetState
	I0805 12:21:54.934746  421523 fix.go:112] recreateIfNeeded on multinode-841883: state=Running err=<nil>
	W0805 12:21:54.934765  421523 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:21:54.936728  421523 out.go:177] * Updating the running kvm2 "multinode-841883" VM ...
	I0805 12:21:54.937971  421523 machine.go:94] provisionDockerMachine start ...
	I0805 12:21:54.938011  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:21:54.938256  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:54.941112  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:54.941603  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:54.941625  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:54.941807  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:54.942000  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:54.942171  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:54.942285  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:54.942436  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:54.942645  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:54.942664  421523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:21:55.056589  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841883
	
	I0805 12:21:55.056615  421523 main.go:141] libmachine: (multinode-841883) Calling .GetMachineName
	I0805 12:21:55.056906  421523 buildroot.go:166] provisioning hostname "multinode-841883"
	I0805 12:21:55.056938  421523 main.go:141] libmachine: (multinode-841883) Calling .GetMachineName
	I0805 12:21:55.057193  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.059679  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.060027  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.060054  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.060159  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.060359  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.060634  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.060792  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.060957  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:55.061294  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:55.061327  421523 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841883 && echo "multinode-841883" | sudo tee /etc/hostname
	I0805 12:21:55.182960  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841883
	
	I0805 12:21:55.182988  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.186110  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.186502  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.186561  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.186681  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.186874  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.187069  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.187184  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.187381  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:55.187597  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:55.187620  421523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841883/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:21:55.296831  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:21:55.296862  421523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:21:55.296901  421523 buildroot.go:174] setting up certificates
	I0805 12:21:55.296910  421523 provision.go:84] configureAuth start
	I0805 12:21:55.296920  421523 main.go:141] libmachine: (multinode-841883) Calling .GetMachineName
	I0805 12:21:55.297203  421523 main.go:141] libmachine: (multinode-841883) Calling .GetIP
	I0805 12:21:55.300056  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.300460  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.300485  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.300805  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.303190  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.303554  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.303592  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.303777  421523 provision.go:143] copyHostCerts
	I0805 12:21:55.303808  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:21:55.303854  421523 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:21:55.303875  421523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:21:55.303956  421523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:21:55.304061  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:21:55.304088  421523 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:21:55.304098  421523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:21:55.304136  421523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:21:55.304237  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:21:55.304334  421523 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:21:55.304362  421523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:21:55.304423  421523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:21:55.304532  421523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.multinode-841883 san=[127.0.0.1 192.168.39.86 localhost minikube multinode-841883]
	I0805 12:21:55.647089  421523 provision.go:177] copyRemoteCerts
	I0805 12:21:55.647168  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:21:55.647200  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.650056  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.650501  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.650563  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.650694  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.650894  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.651118  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.651319  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:21:55.738988  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 12:21:55.739071  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:21:55.765888  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 12:21:55.765963  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 12:21:55.793111  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 12:21:55.793206  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:21:55.818208  421523 provision.go:87] duration metric: took 521.279177ms to configureAuth
	I0805 12:21:55.818244  421523 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:21:55.818480  421523 config.go:182] Loaded profile config "multinode-841883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:21:55.818568  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.821361  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.821753  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.821788  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.821910  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.822134  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.822304  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.822473  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.822691  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:55.822880  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:55.822896  421523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:23:26.705502  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:23:26.705543  421523 machine.go:97] duration metric: took 1m31.767551252s to provisionDockerMachine
	I0805 12:23:26.705560  421523 start.go:293] postStartSetup for "multinode-841883" (driver="kvm2")
	I0805 12:23:26.705577  421523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:23:26.705601  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.705984  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:23:26.706016  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.709515  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.709983  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.710017  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.710152  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.710349  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.710538  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.710715  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:23:26.795819  421523 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:23:26.799957  421523 command_runner.go:130] > NAME=Buildroot
	I0805 12:23:26.799974  421523 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 12:23:26.799980  421523 command_runner.go:130] > ID=buildroot
	I0805 12:23:26.799988  421523 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 12:23:26.799995  421523 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 12:23:26.800048  421523 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:23:26.800090  421523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:23:26.800185  421523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:23:26.800290  421523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:23:26.800303  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 12:23:26.800439  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:23:26.810657  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:23:26.833987  421523 start.go:296] duration metric: took 128.408207ms for postStartSetup
	I0805 12:23:26.834050  421523 fix.go:56] duration metric: took 1m31.916920326s for fixHost
	I0805 12:23:26.834077  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.836778  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.837247  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.837278  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.837473  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.837682  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.837847  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.837982  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.838141  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:23:26.838366  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:23:26.838381  421523 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:23:26.944755  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722860606.923364013
	
	I0805 12:23:26.944784  421523 fix.go:216] guest clock: 1722860606.923364013
	I0805 12:23:26.944792  421523 fix.go:229] Guest: 2024-08-05 12:23:26.923364013 +0000 UTC Remote: 2024-08-05 12:23:26.834055772 +0000 UTC m=+92.046937845 (delta=89.308241ms)
	I0805 12:23:26.944834  421523 fix.go:200] guest clock delta is within tolerance: 89.308241ms
	I0805 12:23:26.944845  421523 start.go:83] releasing machines lock for "multinode-841883", held for 1m32.027729676s
	I0805 12:23:26.944875  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.945139  421523 main.go:141] libmachine: (multinode-841883) Calling .GetIP
	I0805 12:23:26.947812  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.948246  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.948286  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.948432  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.948935  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.949147  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.949246  421523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:23:26.949293  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.949388  421523 ssh_runner.go:195] Run: cat /version.json
	I0805 12:23:26.949429  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.952088  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952431  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.952456  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952493  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952657  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.952825  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.952932  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.952958  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952963  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.953081  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:23:26.953137  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.953251  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.953434  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.953605  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:23:27.052081  421523 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 12:23:27.052940  421523 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 12:23:27.053115  421523 ssh_runner.go:195] Run: systemctl --version
	I0805 12:23:27.058902  421523 command_runner.go:130] > systemd 252 (252)
	I0805 12:23:27.058936  421523 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 12:23:27.059188  421523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:23:27.234197  421523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 12:23:27.240736  421523 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 12:23:27.240786  421523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:23:27.240852  421523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:23:27.250419  421523 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 12:23:27.250445  421523 start.go:495] detecting cgroup driver to use...
	I0805 12:23:27.250529  421523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:23:27.269813  421523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:23:27.288676  421523 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:23:27.288736  421523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:23:27.303402  421523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:23:27.318886  421523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:23:27.487837  421523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:23:27.625251  421523 docker.go:233] disabling docker service ...
	I0805 12:23:27.625317  421523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:23:27.647064  421523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:23:27.661886  421523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:23:27.794504  421523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:23:27.930845  421523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:23:27.945315  421523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:23:27.963525  421523 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0805 12:23:27.963813  421523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:23:27.963877  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:27.974620  421523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:23:27.974684  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:27.985649  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:27.995775  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.006085  421523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:23:28.016497  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.027028  421523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.037341  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.047576  421523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:23:28.057292  421523 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 12:23:28.057391  421523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:23:28.066643  421523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:23:28.199392  421523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:23:28.447994  421523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:23:28.448076  421523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:23:28.457074  421523 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0805 12:23:28.457111  421523 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 12:23:28.457118  421523 command_runner.go:130] > Device: 0,22	Inode: 1317        Links: 1
	I0805 12:23:28.457125  421523 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 12:23:28.457130  421523 command_runner.go:130] > Access: 2024-08-05 12:23:28.411505442 +0000
	I0805 12:23:28.457138  421523 command_runner.go:130] > Modify: 2024-08-05 12:23:28.321503260 +0000
	I0805 12:23:28.457143  421523 command_runner.go:130] > Change: 2024-08-05 12:23:28.321503260 +0000
	I0805 12:23:28.457146  421523 command_runner.go:130] >  Birth: -
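(Aside, not part of the log: start.go above declares it will wait up to 60s for /var/run/crio/crio.sock and then stats the path, which already exists here. A minimal Go sketch of such a bounded wait loop follows; the 500ms poll interval and the helper name waitForSocket are assumptions for illustration.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is present")
}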
	I0805 12:23:28.457449  421523 start.go:563] Will wait 60s for crictl version
	I0805 12:23:28.457510  421523 ssh_runner.go:195] Run: which crictl
	I0805 12:23:28.461231  421523 command_runner.go:130] > /usr/bin/crictl
	I0805 12:23:28.461499  421523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:23:28.496308  421523 command_runner.go:130] > Version:  0.1.0
	I0805 12:23:28.496331  421523 command_runner.go:130] > RuntimeName:  cri-o
	I0805 12:23:28.496338  421523 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0805 12:23:28.496346  421523 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 12:23:28.496492  421523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:23:28.496570  421523 ssh_runner.go:195] Run: crio --version
	I0805 12:23:28.524237  421523 command_runner.go:130] > crio version 1.29.1
	I0805 12:23:28.524261  421523 command_runner.go:130] > Version:        1.29.1
	I0805 12:23:28.524267  421523 command_runner.go:130] > GitCommit:      unknown
	I0805 12:23:28.524272  421523 command_runner.go:130] > GitCommitDate:  unknown
	I0805 12:23:28.524275  421523 command_runner.go:130] > GitTreeState:   clean
	I0805 12:23:28.524281  421523 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 12:23:28.524285  421523 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 12:23:28.524289  421523 command_runner.go:130] > Compiler:       gc
	I0805 12:23:28.524294  421523 command_runner.go:130] > Platform:       linux/amd64
	I0805 12:23:28.524301  421523 command_runner.go:130] > Linkmode:       dynamic
	I0805 12:23:28.524308  421523 command_runner.go:130] > BuildTags:      
	I0805 12:23:28.524327  421523 command_runner.go:130] >   containers_image_ostree_stub
	I0805 12:23:28.524335  421523 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 12:23:28.524343  421523 command_runner.go:130] >   btrfs_noversion
	I0805 12:23:28.524350  421523 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 12:23:28.524361  421523 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 12:23:28.524365  421523 command_runner.go:130] >   seccomp
	I0805 12:23:28.524370  421523 command_runner.go:130] > LDFlags:          unknown
	I0805 12:23:28.524377  421523 command_runner.go:130] > SeccompEnabled:   true
	I0805 12:23:28.524381  421523 command_runner.go:130] > AppArmorEnabled:  false
	I0805 12:23:28.524462  421523 ssh_runner.go:195] Run: crio --version
	I0805 12:23:28.551812  421523 command_runner.go:130] > crio version 1.29.1
	I0805 12:23:28.551840  421523 command_runner.go:130] > Version:        1.29.1
	I0805 12:23:28.551850  421523 command_runner.go:130] > GitCommit:      unknown
	I0805 12:23:28.551856  421523 command_runner.go:130] > GitCommitDate:  unknown
	I0805 12:23:28.551863  421523 command_runner.go:130] > GitTreeState:   clean
	I0805 12:23:28.551872  421523 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 12:23:28.551879  421523 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 12:23:28.551885  421523 command_runner.go:130] > Compiler:       gc
	I0805 12:23:28.551892  421523 command_runner.go:130] > Platform:       linux/amd64
	I0805 12:23:28.551896  421523 command_runner.go:130] > Linkmode:       dynamic
	I0805 12:23:28.551901  421523 command_runner.go:130] > BuildTags:      
	I0805 12:23:28.551906  421523 command_runner.go:130] >   containers_image_ostree_stub
	I0805 12:23:28.551911  421523 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 12:23:28.551915  421523 command_runner.go:130] >   btrfs_noversion
	I0805 12:23:28.551920  421523 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 12:23:28.551924  421523 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 12:23:28.551928  421523 command_runner.go:130] >   seccomp
	I0805 12:23:28.551932  421523 command_runner.go:130] > LDFlags:          unknown
	I0805 12:23:28.551936  421523 command_runner.go:130] > SeccompEnabled:   true
	I0805 12:23:28.551941  421523 command_runner.go:130] > AppArmorEnabled:  false
	I0805 12:23:28.554053  421523 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:23:28.555418  421523 main.go:141] libmachine: (multinode-841883) Calling .GetIP
	I0805 12:23:28.558354  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:28.558703  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:28.558738  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:28.558910  421523 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:23:28.563510  421523 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0805 12:23:28.563636  421523 kubeadm.go:883] updating cluster {Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:23:28.563839  421523 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:23:28.563910  421523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:23:28.607922  421523 command_runner.go:130] > {
	I0805 12:23:28.607950  421523 command_runner.go:130] >   "images": [
	I0805 12:23:28.607955  421523 command_runner.go:130] >     {
	I0805 12:23:28.607962  421523 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 12:23:28.607967  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.607973  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 12:23:28.607977  421523 command_runner.go:130] >       ],
	I0805 12:23:28.607981  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.607991  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 12:23:28.608003  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 12:23:28.608010  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608018  421523 command_runner.go:130] >       "size": "87165492",
	I0805 12:23:28.608028  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608034  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608046  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608053  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608060  421523 command_runner.go:130] >     },
	I0805 12:23:28.608065  421523 command_runner.go:130] >     {
	I0805 12:23:28.608077  421523 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0805 12:23:28.608087  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608100  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0805 12:23:28.608106  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608115  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608127  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0805 12:23:28.608144  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0805 12:23:28.608152  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608158  421523 command_runner.go:130] >       "size": "87174707",
	I0805 12:23:28.608162  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608171  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608181  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608193  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608203  421523 command_runner.go:130] >     },
	I0805 12:23:28.608211  421523 command_runner.go:130] >     {
	I0805 12:23:28.608221  421523 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 12:23:28.608230  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608239  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 12:23:28.608245  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608249  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608262  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 12:23:28.608277  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 12:23:28.608286  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608296  421523 command_runner.go:130] >       "size": "1363676",
	I0805 12:23:28.608305  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608398  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608439  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608447  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608456  421523 command_runner.go:130] >     },
	I0805 12:23:28.608462  421523 command_runner.go:130] >     {
	I0805 12:23:28.608476  421523 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 12:23:28.608486  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608502  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 12:23:28.608513  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608521  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608531  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 12:23:28.608558  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 12:23:28.608567  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608574  421523 command_runner.go:130] >       "size": "31470524",
	I0805 12:23:28.608583  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608590  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608599  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608604  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608611  421523 command_runner.go:130] >     },
	I0805 12:23:28.608616  421523 command_runner.go:130] >     {
	I0805 12:23:28.608627  421523 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 12:23:28.608637  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608645  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 12:23:28.608656  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608664  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608678  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 12:23:28.608691  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 12:23:28.608697  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608703  421523 command_runner.go:130] >       "size": "61245718",
	I0805 12:23:28.608712  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608720  421523 command_runner.go:130] >       "username": "nonroot",
	I0805 12:23:28.608728  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608735  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608744  421523 command_runner.go:130] >     },
	I0805 12:23:28.608750  421523 command_runner.go:130] >     {
	I0805 12:23:28.608763  421523 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 12:23:28.608772  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608778  421523 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 12:23:28.608784  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608791  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608805  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 12:23:28.608819  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 12:23:28.608828  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608835  421523 command_runner.go:130] >       "size": "150779692",
	I0805 12:23:28.608843  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.608849  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.608858  421523 command_runner.go:130] >       },
	I0805 12:23:28.608863  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608869  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608875  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608884  421523 command_runner.go:130] >     },
	I0805 12:23:28.608890  421523 command_runner.go:130] >     {
	I0805 12:23:28.608903  421523 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 12:23:28.608912  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608927  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 12:23:28.608936  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608943  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608953  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 12:23:28.608966  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 12:23:28.608978  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608987  421523 command_runner.go:130] >       "size": "117609954",
	I0805 12:23:28.608993  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609013  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.609023  421523 command_runner.go:130] >       },
	I0805 12:23:28.609032  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609037  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609041  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609047  421523 command_runner.go:130] >     },
	I0805 12:23:28.609059  421523 command_runner.go:130] >     {
	I0805 12:23:28.609072  421523 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 12:23:28.609085  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609096  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 12:23:28.609105  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609112  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609134  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 12:23:28.609147  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 12:23:28.609153  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609164  421523 command_runner.go:130] >       "size": "112198984",
	I0805 12:23:28.609171  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609180  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.609185  421523 command_runner.go:130] >       },
	I0805 12:23:28.609194  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609200  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609206  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609211  421523 command_runner.go:130] >     },
	I0805 12:23:28.609215  421523 command_runner.go:130] >     {
	I0805 12:23:28.609221  421523 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 12:23:28.609225  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609232  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 12:23:28.609237  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609246  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609258  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 12:23:28.609269  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 12:23:28.609283  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609290  421523 command_runner.go:130] >       "size": "85953945",
	I0805 12:23:28.609297  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.609303  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609310  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609317  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609323  421523 command_runner.go:130] >     },
	I0805 12:23:28.609333  421523 command_runner.go:130] >     {
	I0805 12:23:28.609352  421523 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 12:23:28.609361  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609370  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 12:23:28.609378  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609382  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609390  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 12:23:28.609400  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 12:23:28.609405  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609414  421523 command_runner.go:130] >       "size": "63051080",
	I0805 12:23:28.609420  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609429  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.609435  421523 command_runner.go:130] >       },
	I0805 12:23:28.609445  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609451  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609460  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609466  421523 command_runner.go:130] >     },
	I0805 12:23:28.609471  421523 command_runner.go:130] >     {
	I0805 12:23:28.609482  421523 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 12:23:28.609491  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609496  421523 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 12:23:28.609500  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609505  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609514  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 12:23:28.609521  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 12:23:28.609526  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609530  421523 command_runner.go:130] >       "size": "750414",
	I0805 12:23:28.609534  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609538  421523 command_runner.go:130] >         "value": "65535"
	I0805 12:23:28.609541  421523 command_runner.go:130] >       },
	I0805 12:23:28.609545  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609551  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609557  421523 command_runner.go:130] >       "pinned": true
	I0805 12:23:28.609560  421523 command_runner.go:130] >     }
	I0805 12:23:28.609564  421523 command_runner.go:130] >   ]
	I0805 12:23:28.609567  421523 command_runner.go:130] > }
	I0805 12:23:28.609787  421523 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:23:28.609799  421523 crio.go:433] Images already preloaded, skipping extraction
	I0805 12:23:28.609852  421523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:23:28.642887  421523 command_runner.go:130] > {
	I0805 12:23:28.642909  421523 command_runner.go:130] >   "images": [
	I0805 12:23:28.642914  421523 command_runner.go:130] >     {
	I0805 12:23:28.642922  421523 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 12:23:28.642936  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.642943  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 12:23:28.642947  421523 command_runner.go:130] >       ],
	I0805 12:23:28.642951  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.642959  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 12:23:28.642966  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 12:23:28.642969  421523 command_runner.go:130] >       ],
	I0805 12:23:28.642974  421523 command_runner.go:130] >       "size": "87165492",
	I0805 12:23:28.642977  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.642981  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.642990  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.642994  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.642998  421523 command_runner.go:130] >     },
	I0805 12:23:28.643001  421523 command_runner.go:130] >     {
	I0805 12:23:28.643008  421523 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0805 12:23:28.643015  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643020  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0805 12:23:28.643024  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643028  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643037  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0805 12:23:28.643044  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0805 12:23:28.643050  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643054  421523 command_runner.go:130] >       "size": "87174707",
	I0805 12:23:28.643059  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643067  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643071  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643075  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643080  421523 command_runner.go:130] >     },
	I0805 12:23:28.643084  421523 command_runner.go:130] >     {
	I0805 12:23:28.643092  421523 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 12:23:28.643099  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643104  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 12:23:28.643108  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643112  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643119  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 12:23:28.643125  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 12:23:28.643129  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643133  421523 command_runner.go:130] >       "size": "1363676",
	I0805 12:23:28.643136  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643142  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643164  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643171  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643174  421523 command_runner.go:130] >     },
	I0805 12:23:28.643177  421523 command_runner.go:130] >     {
	I0805 12:23:28.643183  421523 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 12:23:28.643187  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643192  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 12:23:28.643198  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643202  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643209  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 12:23:28.643222  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 12:23:28.643228  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643232  421523 command_runner.go:130] >       "size": "31470524",
	I0805 12:23:28.643236  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643240  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643244  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643248  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643252  421523 command_runner.go:130] >     },
	I0805 12:23:28.643255  421523 command_runner.go:130] >     {
	I0805 12:23:28.643262  421523 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 12:23:28.643269  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643274  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 12:23:28.643278  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643281  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643288  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 12:23:28.643297  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 12:23:28.643301  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643305  421523 command_runner.go:130] >       "size": "61245718",
	I0805 12:23:28.643309  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643313  421523 command_runner.go:130] >       "username": "nonroot",
	I0805 12:23:28.643317  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643321  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643324  421523 command_runner.go:130] >     },
	I0805 12:23:28.643327  421523 command_runner.go:130] >     {
	I0805 12:23:28.643335  421523 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 12:23:28.643341  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643346  421523 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 12:23:28.643350  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643354  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643363  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 12:23:28.643369  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 12:23:28.643375  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643378  421523 command_runner.go:130] >       "size": "150779692",
	I0805 12:23:28.643382  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643388  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643394  421523 command_runner.go:130] >       },
	I0805 12:23:28.643400  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643404  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643407  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643411  421523 command_runner.go:130] >     },
	I0805 12:23:28.643414  421523 command_runner.go:130] >     {
	I0805 12:23:28.643420  421523 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 12:23:28.643426  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643431  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 12:23:28.643436  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643449  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643458  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 12:23:28.643465  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 12:23:28.643469  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643473  421523 command_runner.go:130] >       "size": "117609954",
	I0805 12:23:28.643479  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643483  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643487  421523 command_runner.go:130] >       },
	I0805 12:23:28.643491  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643494  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643498  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643504  421523 command_runner.go:130] >     },
	I0805 12:23:28.643507  421523 command_runner.go:130] >     {
	I0805 12:23:28.643515  421523 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 12:23:28.643520  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643527  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 12:23:28.643530  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643537  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643550  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 12:23:28.643559  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 12:23:28.643563  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643567  421523 command_runner.go:130] >       "size": "112198984",
	I0805 12:23:28.643572  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643576  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643581  421523 command_runner.go:130] >       },
	I0805 12:23:28.643585  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643591  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643595  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643600  421523 command_runner.go:130] >     },
	I0805 12:23:28.643604  421523 command_runner.go:130] >     {
	I0805 12:23:28.643610  421523 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 12:23:28.643615  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643620  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 12:23:28.643623  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643627  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643636  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 12:23:28.643646  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 12:23:28.643653  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643657  421523 command_runner.go:130] >       "size": "85953945",
	I0805 12:23:28.643660  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643665  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643668  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643672  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643675  421523 command_runner.go:130] >     },
	I0805 12:23:28.643679  421523 command_runner.go:130] >     {
	I0805 12:23:28.643685  421523 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 12:23:28.643691  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643697  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 12:23:28.643702  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643706  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643713  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 12:23:28.643721  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 12:23:28.643725  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643729  421523 command_runner.go:130] >       "size": "63051080",
	I0805 12:23:28.643735  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643749  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643755  421523 command_runner.go:130] >       },
	I0805 12:23:28.643759  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643763  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643769  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643773  421523 command_runner.go:130] >     },
	I0805 12:23:28.643778  421523 command_runner.go:130] >     {
	I0805 12:23:28.643784  421523 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 12:23:28.643790  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643794  421523 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 12:23:28.643798  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643803  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643812  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 12:23:28.643821  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 12:23:28.643824  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643828  421523 command_runner.go:130] >       "size": "750414",
	I0805 12:23:28.643833  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643838  421523 command_runner.go:130] >         "value": "65535"
	I0805 12:23:28.643844  421523 command_runner.go:130] >       },
	I0805 12:23:28.643848  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643851  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643855  421523 command_runner.go:130] >       "pinned": true
	I0805 12:23:28.643859  421523 command_runner.go:130] >     }
	I0805 12:23:28.643862  421523 command_runner.go:130] >   ]
	I0805 12:23:28.643865  421523 command_runner.go:130] > }
	I0805 12:23:28.644810  421523 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:23:28.644830  421523 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:23:28.644841  421523 kubeadm.go:934] updating node { 192.168.39.86 8443 v1.30.3 crio true true} ...
	I0805 12:23:28.644972  421523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-841883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:23:28.645057  421523 ssh_runner.go:195] Run: crio config
	I0805 12:23:28.685241  421523 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0805 12:23:28.685264  421523 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0805 12:23:28.685271  421523 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0805 12:23:28.685274  421523 command_runner.go:130] > #
	I0805 12:23:28.685287  421523 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0805 12:23:28.685297  421523 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0805 12:23:28.685307  421523 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0805 12:23:28.685320  421523 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0805 12:23:28.685324  421523 command_runner.go:130] > # reload'.
	I0805 12:23:28.685331  421523 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0805 12:23:28.685337  421523 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0805 12:23:28.685343  421523 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0805 12:23:28.685351  421523 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0805 12:23:28.685355  421523 command_runner.go:130] > [crio]
	I0805 12:23:28.685361  421523 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0805 12:23:28.685370  421523 command_runner.go:130] > # containers images, in this directory.
	I0805 12:23:28.685377  421523 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0805 12:23:28.685389  421523 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0805 12:23:28.685400  421523 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0805 12:23:28.685411  421523 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0805 12:23:28.685421  421523 command_runner.go:130] > # imagestore = ""
	I0805 12:23:28.685432  421523 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0805 12:23:28.685438  421523 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0805 12:23:28.685443  421523 command_runner.go:130] > storage_driver = "overlay"
	I0805 12:23:28.685449  421523 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0805 12:23:28.685461  421523 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0805 12:23:28.685467  421523 command_runner.go:130] > storage_option = [
	I0805 12:23:28.685481  421523 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0805 12:23:28.685489  421523 command_runner.go:130] > ]
	I0805 12:23:28.685498  421523 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0805 12:23:28.685516  421523 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0805 12:23:28.685522  421523 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0805 12:23:28.685530  421523 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0805 12:23:28.685538  421523 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0805 12:23:28.685549  421523 command_runner.go:130] > # always happen on a node reboot
	I0805 12:23:28.685557  421523 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0805 12:23:28.685572  421523 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0805 12:23:28.685584  421523 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0805 12:23:28.685593  421523 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0805 12:23:28.685601  421523 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0805 12:23:28.685616  421523 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0805 12:23:28.685631  421523 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0805 12:23:28.685641  421523 command_runner.go:130] > # internal_wipe = true
	I0805 12:23:28.685654  421523 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0805 12:23:28.685666  421523 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0805 12:23:28.685672  421523 command_runner.go:130] > # internal_repair = false
	I0805 12:23:28.685680  421523 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0805 12:23:28.685689  421523 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0805 12:23:28.685701  421523 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0805 12:23:28.685711  421523 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0805 12:23:28.685723  421523 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0805 12:23:28.685732  421523 command_runner.go:130] > [crio.api]
	I0805 12:23:28.685740  421523 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0805 12:23:28.685749  421523 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0805 12:23:28.685759  421523 command_runner.go:130] > # IP address on which the stream server will listen.
	I0805 12:23:28.685765  421523 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0805 12:23:28.685774  421523 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0805 12:23:28.685786  421523 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0805 12:23:28.685796  421523 command_runner.go:130] > # stream_port = "0"
	I0805 12:23:28.685806  421523 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0805 12:23:28.685817  421523 command_runner.go:130] > # stream_enable_tls = false
	I0805 12:23:28.685829  421523 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0805 12:23:28.685841  421523 command_runner.go:130] > # stream_idle_timeout = ""
	I0805 12:23:28.685852  421523 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0805 12:23:28.685864  421523 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0805 12:23:28.685874  421523 command_runner.go:130] > # minutes.
	I0805 12:23:28.685881  421523 command_runner.go:130] > # stream_tls_cert = ""
	I0805 12:23:28.685898  421523 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0805 12:23:28.685910  421523 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0805 12:23:28.685920  421523 command_runner.go:130] > # stream_tls_key = ""
	I0805 12:23:28.685929  421523 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0805 12:23:28.685938  421523 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0805 12:23:28.685954  421523 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0805 12:23:28.685964  421523 command_runner.go:130] > # stream_tls_ca = ""
	I0805 12:23:28.685977  421523 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 12:23:28.685987  421523 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0805 12:23:28.686005  421523 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 12:23:28.686015  421523 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0805 12:23:28.686022  421523 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0805 12:23:28.686030  421523 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0805 12:23:28.686040  421523 command_runner.go:130] > [crio.runtime]
	I0805 12:23:28.686052  421523 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0805 12:23:28.686127  421523 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0805 12:23:28.686134  421523 command_runner.go:130] > # "nofile=1024:2048"
	I0805 12:23:28.686144  421523 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0805 12:23:28.686150  421523 command_runner.go:130] > # default_ulimits = [
	I0805 12:23:28.686155  421523 command_runner.go:130] > # ]
	I0805 12:23:28.686172  421523 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0805 12:23:28.686179  421523 command_runner.go:130] > # no_pivot = false
	I0805 12:23:28.686185  421523 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0805 12:23:28.686192  421523 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0805 12:23:28.686199  421523 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0805 12:23:28.686207  421523 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0805 12:23:28.686214  421523 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0805 12:23:28.686226  421523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 12:23:28.686237  421523 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0805 12:23:28.686244  421523 command_runner.go:130] > # Cgroup setting for conmon
	I0805 12:23:28.686258  421523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0805 12:23:28.686268  421523 command_runner.go:130] > conmon_cgroup = "pod"
	I0805 12:23:28.686279  421523 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0805 12:23:28.686287  421523 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0805 12:23:28.686297  421523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 12:23:28.686306  421523 command_runner.go:130] > conmon_env = [
	I0805 12:23:28.686318  421523 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 12:23:28.686334  421523 command_runner.go:130] > ]
	I0805 12:23:28.686345  421523 command_runner.go:130] > # Additional environment variables to set for all the
	I0805 12:23:28.686355  421523 command_runner.go:130] > # containers. These are overridden if set in the
	I0805 12:23:28.686368  421523 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0805 12:23:28.686377  421523 command_runner.go:130] > # default_env = [
	I0805 12:23:28.686383  421523 command_runner.go:130] > # ]
	I0805 12:23:28.686395  421523 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0805 12:23:28.686411  421523 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0805 12:23:28.686420  421523 command_runner.go:130] > # selinux = false
	I0805 12:23:28.686430  421523 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0805 12:23:28.686442  421523 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0805 12:23:28.686452  421523 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0805 12:23:28.686458  421523 command_runner.go:130] > # seccomp_profile = ""
	I0805 12:23:28.686471  421523 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0805 12:23:28.686483  421523 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0805 12:23:28.686495  421523 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0805 12:23:28.686506  421523 command_runner.go:130] > # which might increase security.
	I0805 12:23:28.686517  421523 command_runner.go:130] > # This option is currently deprecated,
	I0805 12:23:28.686526  421523 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0805 12:23:28.686536  421523 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0805 12:23:28.686546  421523 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0805 12:23:28.686556  421523 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0805 12:23:28.686562  421523 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0805 12:23:28.686570  421523 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0805 12:23:28.686577  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.686582  421523 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0805 12:23:28.686588  421523 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0805 12:23:28.686593  421523 command_runner.go:130] > # the cgroup blockio controller.
	I0805 12:23:28.686597  421523 command_runner.go:130] > # blockio_config_file = ""
	I0805 12:23:28.686607  421523 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0805 12:23:28.686616  421523 command_runner.go:130] > # blockio parameters.
	I0805 12:23:28.686624  421523 command_runner.go:130] > # blockio_reload = false
	I0805 12:23:28.686638  421523 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0805 12:23:28.686648  421523 command_runner.go:130] > # irqbalance daemon.
	I0805 12:23:28.686658  421523 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0805 12:23:28.686670  421523 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0805 12:23:28.686681  421523 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0805 12:23:28.686695  421523 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0805 12:23:28.686712  421523 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0805 12:23:28.686727  421523 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0805 12:23:28.686739  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.686749  421523 command_runner.go:130] > # rdt_config_file = ""
	I0805 12:23:28.686760  421523 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0805 12:23:28.686770  421523 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0805 12:23:28.686791  421523 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0805 12:23:28.686801  421523 command_runner.go:130] > # separate_pull_cgroup = ""
	I0805 12:23:28.686811  421523 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0805 12:23:28.686823  421523 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0805 12:23:28.686833  421523 command_runner.go:130] > # will be added.
	I0805 12:23:28.686840  421523 command_runner.go:130] > # default_capabilities = [
	I0805 12:23:28.686848  421523 command_runner.go:130] > # 	"CHOWN",
	I0805 12:23:28.686853  421523 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0805 12:23:28.686859  421523 command_runner.go:130] > # 	"FSETID",
	I0805 12:23:28.686862  421523 command_runner.go:130] > # 	"FOWNER",
	I0805 12:23:28.686868  421523 command_runner.go:130] > # 	"SETGID",
	I0805 12:23:28.686874  421523 command_runner.go:130] > # 	"SETUID",
	I0805 12:23:28.686879  421523 command_runner.go:130] > # 	"SETPCAP",
	I0805 12:23:28.686888  421523 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0805 12:23:28.686894  421523 command_runner.go:130] > # 	"KILL",
	I0805 12:23:28.686903  421523 command_runner.go:130] > # ]
	I0805 12:23:28.686914  421523 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0805 12:23:28.686927  421523 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0805 12:23:28.686935  421523 command_runner.go:130] > # add_inheritable_capabilities = false
	I0805 12:23:28.686947  421523 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0805 12:23:28.686961  421523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 12:23:28.686971  421523 command_runner.go:130] > default_sysctls = [
	I0805 12:23:28.686978  421523 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0805 12:23:28.686988  421523 command_runner.go:130] > ]
	I0805 12:23:28.687000  421523 command_runner.go:130] > # List of devices on the host that a
	I0805 12:23:28.687012  421523 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0805 12:23:28.687022  421523 command_runner.go:130] > # allowed_devices = [
	I0805 12:23:28.687028  421523 command_runner.go:130] > # 	"/dev/fuse",
	I0805 12:23:28.687034  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687042  421523 command_runner.go:130] > # List of additional devices, specified as
	I0805 12:23:28.687056  421523 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0805 12:23:28.687068  421523 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0805 12:23:28.687079  421523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 12:23:28.687086  421523 command_runner.go:130] > # additional_devices = [
	I0805 12:23:28.687091  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687100  421523 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0805 12:23:28.687113  421523 command_runner.go:130] > # cdi_spec_dirs = [
	I0805 12:23:28.687121  421523 command_runner.go:130] > # 	"/etc/cdi",
	I0805 12:23:28.687128  421523 command_runner.go:130] > # 	"/var/run/cdi",
	I0805 12:23:28.687137  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687147  421523 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0805 12:23:28.687165  421523 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0805 12:23:28.687174  421523 command_runner.go:130] > # Defaults to false.
	I0805 12:23:28.687183  421523 command_runner.go:130] > # device_ownership_from_security_context = false
	I0805 12:23:28.687195  421523 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0805 12:23:28.687205  421523 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0805 12:23:28.687215  421523 command_runner.go:130] > # hooks_dir = [
	I0805 12:23:28.687222  421523 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0805 12:23:28.687229  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687239  421523 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0805 12:23:28.687251  421523 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0805 12:23:28.687263  421523 command_runner.go:130] > # its default mounts from the following two files:
	I0805 12:23:28.687271  421523 command_runner.go:130] > #
	I0805 12:23:28.687281  421523 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0805 12:23:28.687293  421523 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0805 12:23:28.687305  421523 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0805 12:23:28.687313  421523 command_runner.go:130] > #
	I0805 12:23:28.687322  421523 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0805 12:23:28.687336  421523 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0805 12:23:28.687348  421523 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0805 12:23:28.687360  421523 command_runner.go:130] > #      only add mounts it finds in this file.
	I0805 12:23:28.687368  421523 command_runner.go:130] > #
	I0805 12:23:28.687374  421523 command_runner.go:130] > # default_mounts_file = ""
	I0805 12:23:28.687385  421523 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0805 12:23:28.687395  421523 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0805 12:23:28.687403  421523 command_runner.go:130] > pids_limit = 1024
	I0805 12:23:28.687412  421523 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0805 12:23:28.687424  421523 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0805 12:23:28.687438  421523 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0805 12:23:28.687453  421523 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0805 12:23:28.687462  421523 command_runner.go:130] > # log_size_max = -1
	I0805 12:23:28.687473  421523 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0805 12:23:28.687483  421523 command_runner.go:130] > # log_to_journald = false
	I0805 12:23:28.687492  421523 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0805 12:23:28.687507  421523 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0805 12:23:28.687522  421523 command_runner.go:130] > # Path to directory for container attach sockets.
	I0805 12:23:28.687533  421523 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0805 12:23:28.687541  421523 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0805 12:23:28.687550  421523 command_runner.go:130] > # bind_mount_prefix = ""
	I0805 12:23:28.687560  421523 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0805 12:23:28.687569  421523 command_runner.go:130] > # read_only = false
	I0805 12:23:28.687580  421523 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0805 12:23:28.687593  421523 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0805 12:23:28.687600  421523 command_runner.go:130] > # live configuration reload.
	I0805 12:23:28.687609  421523 command_runner.go:130] > # log_level = "info"
	I0805 12:23:28.687620  421523 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0805 12:23:28.687632  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.687640  421523 command_runner.go:130] > # log_filter = ""
	I0805 12:23:28.687650  421523 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0805 12:23:28.687661  421523 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0805 12:23:28.687666  421523 command_runner.go:130] > # separated by comma.
	I0805 12:23:28.687679  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687688  421523 command_runner.go:130] > # uid_mappings = ""
	I0805 12:23:28.687698  421523 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0805 12:23:28.687711  421523 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0805 12:23:28.687722  421523 command_runner.go:130] > # separated by comma.
	I0805 12:23:28.687734  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687752  421523 command_runner.go:130] > # gid_mappings = ""
	I0805 12:23:28.687762  421523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0805 12:23:28.687773  421523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 12:23:28.687781  421523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 12:23:28.687803  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687814  421523 command_runner.go:130] > # minimum_mappable_uid = -1
	I0805 12:23:28.687823  421523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0805 12:23:28.687835  421523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 12:23:28.687846  421523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 12:23:28.687860  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687870  421523 command_runner.go:130] > # minimum_mappable_gid = -1
	I0805 12:23:28.687879  421523 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0805 12:23:28.687892  421523 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0805 12:23:28.687902  421523 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0805 12:23:28.687915  421523 command_runner.go:130] > # ctr_stop_timeout = 30
	I0805 12:23:28.687927  421523 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0805 12:23:28.687939  421523 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0805 12:23:28.687949  421523 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0805 12:23:28.687958  421523 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0805 12:23:28.687962  421523 command_runner.go:130] > drop_infra_ctr = false
	I0805 12:23:28.687970  421523 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0805 12:23:28.687980  421523 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0805 12:23:28.687992  421523 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0805 12:23:28.688002  421523 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0805 12:23:28.688013  421523 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0805 12:23:28.688024  421523 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0805 12:23:28.688037  421523 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0805 12:23:28.688047  421523 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0805 12:23:28.688053  421523 command_runner.go:130] > # shared_cpuset = ""
	I0805 12:23:28.688061  421523 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0805 12:23:28.688072  421523 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0805 12:23:28.688079  421523 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0805 12:23:28.688092  421523 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0805 12:23:28.688102  421523 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0805 12:23:28.688112  421523 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0805 12:23:28.688126  421523 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0805 12:23:28.688134  421523 command_runner.go:130] > # enable_criu_support = false
	I0805 12:23:28.688142  421523 command_runner.go:130] > # Enable/disable the generation of the container,
	I0805 12:23:28.688154  421523 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0805 12:23:28.688168  421523 command_runner.go:130] > # enable_pod_events = false
	I0805 12:23:28.688178  421523 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0805 12:23:28.688203  421523 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0805 12:23:28.688210  421523 command_runner.go:130] > # default_runtime = "runc"
	I0805 12:23:28.688222  421523 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0805 12:23:28.688235  421523 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0805 12:23:28.688248  421523 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0805 12:23:28.688258  421523 command_runner.go:130] > # creation as a file is not desired either.
	I0805 12:23:28.688272  421523 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0805 12:23:28.688286  421523 command_runner.go:130] > # the hostname is being managed dynamically.
	I0805 12:23:28.688296  421523 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0805 12:23:28.688303  421523 command_runner.go:130] > # ]
	I0805 12:23:28.688314  421523 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0805 12:23:28.688325  421523 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0805 12:23:28.688334  421523 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0805 12:23:28.688342  421523 command_runner.go:130] > # Each entry in the table should follow the format:
	I0805 12:23:28.688351  421523 command_runner.go:130] > #
	I0805 12:23:28.688358  421523 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0805 12:23:28.688368  421523 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0805 12:23:28.688394  421523 command_runner.go:130] > # runtime_type = "oci"
	I0805 12:23:28.688404  421523 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0805 12:23:28.688413  421523 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0805 12:23:28.688423  421523 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0805 12:23:28.688431  421523 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0805 12:23:28.688440  421523 command_runner.go:130] > # monitor_env = []
	I0805 12:23:28.688448  421523 command_runner.go:130] > # privileged_without_host_devices = false
	I0805 12:23:28.688459  421523 command_runner.go:130] > # allowed_annotations = []
	I0805 12:23:28.688471  421523 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0805 12:23:28.688480  421523 command_runner.go:130] > # Where:
	I0805 12:23:28.688489  421523 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0805 12:23:28.688502  421523 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0805 12:23:28.688513  421523 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0805 12:23:28.688519  421523 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0805 12:23:28.688528  421523 command_runner.go:130] > #   in $PATH.
	I0805 12:23:28.688538  421523 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0805 12:23:28.688549  421523 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0805 12:23:28.688557  421523 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0805 12:23:28.688568  421523 command_runner.go:130] > #   state.
	I0805 12:23:28.688582  421523 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0805 12:23:28.688595  421523 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0805 12:23:28.688608  421523 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0805 12:23:28.688618  421523 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0805 12:23:28.688627  421523 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0805 12:23:28.688637  421523 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0805 12:23:28.688648  421523 command_runner.go:130] > #   The currently recognized values are:
	I0805 12:23:28.688658  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0805 12:23:28.688673  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0805 12:23:28.688690  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0805 12:23:28.688703  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0805 12:23:28.688716  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0805 12:23:28.688725  421523 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0805 12:23:28.688735  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0805 12:23:28.688748  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0805 12:23:28.688761  421523 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0805 12:23:28.688774  421523 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0805 12:23:28.688784  421523 command_runner.go:130] > #   deprecated option "conmon".
	I0805 12:23:28.688798  421523 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0805 12:23:28.688809  421523 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0805 12:23:28.688816  421523 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0805 12:23:28.688825  421523 command_runner.go:130] > #   should be moved to the container's cgroup
	I0805 12:23:28.688842  421523 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0805 12:23:28.688853  421523 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0805 12:23:28.688863  421523 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0805 12:23:28.688874  421523 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0805 12:23:28.688882  421523 command_runner.go:130] > #
	I0805 12:23:28.688891  421523 command_runner.go:130] > # Using the seccomp notifier feature:
	I0805 12:23:28.688900  421523 command_runner.go:130] > #
	I0805 12:23:28.688909  421523 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0805 12:23:28.688921  421523 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0805 12:23:28.688928  421523 command_runner.go:130] > #
	I0805 12:23:28.688935  421523 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0805 12:23:28.688948  421523 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0805 12:23:28.688957  421523 command_runner.go:130] > #
	I0805 12:23:28.688967  421523 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0805 12:23:28.688975  421523 command_runner.go:130] > # feature.
	I0805 12:23:28.688980  421523 command_runner.go:130] > #
	I0805 12:23:28.688991  421523 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0805 12:23:28.689003  421523 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0805 12:23:28.689012  421523 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0805 12:23:28.689018  421523 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0805 12:23:28.689031  421523 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0805 12:23:28.689039  421523 command_runner.go:130] > #
	I0805 12:23:28.689049  421523 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0805 12:23:28.689065  421523 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0805 12:23:28.689073  421523 command_runner.go:130] > #
	I0805 12:23:28.689082  421523 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0805 12:23:28.689095  421523 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0805 12:23:28.689103  421523 command_runner.go:130] > #
	I0805 12:23:28.689111  421523 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0805 12:23:28.689119  421523 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0805 12:23:28.689125  421523 command_runner.go:130] > # limitation.
	I0805 12:23:28.689135  421523 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0805 12:23:28.689145  421523 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0805 12:23:28.689173  421523 command_runner.go:130] > runtime_type = "oci"
	I0805 12:23:28.689190  421523 command_runner.go:130] > runtime_root = "/run/runc"
	I0805 12:23:28.689196  421523 command_runner.go:130] > runtime_config_path = ""
	I0805 12:23:28.689202  421523 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0805 12:23:28.689206  421523 command_runner.go:130] > monitor_cgroup = "pod"
	I0805 12:23:28.689215  421523 command_runner.go:130] > monitor_exec_cgroup = ""
	I0805 12:23:28.689221  421523 command_runner.go:130] > monitor_env = [
	I0805 12:23:28.689235  421523 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 12:23:28.689240  421523 command_runner.go:130] > ]
	I0805 12:23:28.689253  421523 command_runner.go:130] > privileged_without_host_devices = false
	I0805 12:23:28.689265  421523 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0805 12:23:28.689276  421523 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0805 12:23:28.689288  421523 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0805 12:23:28.689301  421523 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0805 12:23:28.689312  421523 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0805 12:23:28.689323  421523 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0805 12:23:28.689340  421523 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0805 12:23:28.689356  421523 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0805 12:23:28.689368  421523 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0805 12:23:28.689380  421523 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0805 12:23:28.689388  421523 command_runner.go:130] > # Example:
	I0805 12:23:28.689395  421523 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0805 12:23:28.689401  421523 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0805 12:23:28.689406  421523 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0805 12:23:28.689413  421523 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0805 12:23:28.689419  421523 command_runner.go:130] > # cpuset = 0
	I0805 12:23:28.689425  421523 command_runner.go:130] > # cpushares = "0-1"
	I0805 12:23:28.689430  421523 command_runner.go:130] > # Where:
	I0805 12:23:28.689441  421523 command_runner.go:130] > # The workload name is workload-type.
	I0805 12:23:28.689452  421523 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0805 12:23:28.689461  421523 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0805 12:23:28.689469  421523 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0805 12:23:28.689481  421523 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0805 12:23:28.689488  421523 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0805 12:23:28.689493  421523 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0805 12:23:28.689501  421523 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0805 12:23:28.689507  421523 command_runner.go:130] > # Default value is set to true
	I0805 12:23:28.689514  421523 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0805 12:23:28.689523  421523 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0805 12:23:28.689532  421523 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0805 12:23:28.689539  421523 command_runner.go:130] > # Default value is set to 'false'
	I0805 12:23:28.689546  421523 command_runner.go:130] > # disable_hostport_mapping = false
	I0805 12:23:28.689556  421523 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0805 12:23:28.689562  421523 command_runner.go:130] > #
	I0805 12:23:28.689571  421523 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0805 12:23:28.689583  421523 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0805 12:23:28.689592  421523 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0805 12:23:28.689602  421523 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0805 12:23:28.689610  421523 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0805 12:23:28.689617  421523 command_runner.go:130] > [crio.image]
	I0805 12:23:28.689629  421523 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0805 12:23:28.689639  421523 command_runner.go:130] > # default_transport = "docker://"
	I0805 12:23:28.689648  421523 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0805 12:23:28.689660  421523 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0805 12:23:28.689665  421523 command_runner.go:130] > # global_auth_file = ""
	I0805 12:23:28.689672  421523 command_runner.go:130] > # The image used to instantiate infra containers.
	I0805 12:23:28.689683  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.689694  421523 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0805 12:23:28.689707  421523 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0805 12:23:28.689720  421523 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0805 12:23:28.689731  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.689740  421523 command_runner.go:130] > # pause_image_auth_file = ""
	I0805 12:23:28.689749  421523 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0805 12:23:28.689758  421523 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0805 12:23:28.689774  421523 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0805 12:23:28.689786  421523 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0805 12:23:28.689793  421523 command_runner.go:130] > # pause_command = "/pause"
	I0805 12:23:28.689809  421523 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0805 12:23:28.689821  421523 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0805 12:23:28.689834  421523 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0805 12:23:28.689843  421523 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0805 12:23:28.689854  421523 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0805 12:23:28.689862  421523 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0805 12:23:28.689870  421523 command_runner.go:130] > # pinned_images = [
	I0805 12:23:28.689879  421523 command_runner.go:130] > # ]
	I0805 12:23:28.689888  421523 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0805 12:23:28.689901  421523 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0805 12:23:28.689913  421523 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0805 12:23:28.689926  421523 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0805 12:23:28.689936  421523 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0805 12:23:28.689945  421523 command_runner.go:130] > # signature_policy = ""
	I0805 12:23:28.689953  421523 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0805 12:23:28.689966  421523 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0805 12:23:28.689979  421523 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0805 12:23:28.689993  421523 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0805 12:23:28.690006  421523 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0805 12:23:28.690017  421523 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0805 12:23:28.690029  421523 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0805 12:23:28.690041  421523 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0805 12:23:28.690047  421523 command_runner.go:130] > # changing them here.
	I0805 12:23:28.690052  421523 command_runner.go:130] > # insecure_registries = [
	I0805 12:23:28.690060  421523 command_runner.go:130] > # ]
	I0805 12:23:28.690071  421523 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0805 12:23:28.690083  421523 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0805 12:23:28.690093  421523 command_runner.go:130] > # image_volumes = "mkdir"
	I0805 12:23:28.690104  421523 command_runner.go:130] > # Temporary directory to use for storing big files
	I0805 12:23:28.690114  421523 command_runner.go:130] > # big_files_temporary_dir = ""
	I0805 12:23:28.690125  421523 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0805 12:23:28.690132  421523 command_runner.go:130] > # CNI plugins.
	I0805 12:23:28.690136  421523 command_runner.go:130] > [crio.network]
	I0805 12:23:28.690148  421523 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0805 12:23:28.690167  421523 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0805 12:23:28.690177  421523 command_runner.go:130] > # cni_default_network = ""
	I0805 12:23:28.690188  421523 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0805 12:23:28.690198  421523 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0805 12:23:28.690211  421523 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0805 12:23:28.690220  421523 command_runner.go:130] > # plugin_dirs = [
	I0805 12:23:28.690227  421523 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0805 12:23:28.690231  421523 command_runner.go:130] > # ]
	I0805 12:23:28.690242  421523 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0805 12:23:28.690251  421523 command_runner.go:130] > [crio.metrics]
	I0805 12:23:28.690258  421523 command_runner.go:130] > # Globally enable or disable metrics support.
	I0805 12:23:28.690268  421523 command_runner.go:130] > enable_metrics = true
	I0805 12:23:28.690278  421523 command_runner.go:130] > # Specify enabled metrics collectors.
	I0805 12:23:28.690288  421523 command_runner.go:130] > # Per default all metrics are enabled.
	I0805 12:23:28.690300  421523 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0805 12:23:28.690313  421523 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0805 12:23:28.690322  421523 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0805 12:23:28.690330  421523 command_runner.go:130] > # metrics_collectors = [
	I0805 12:23:28.690336  421523 command_runner.go:130] > # 	"operations",
	I0805 12:23:28.690347  421523 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0805 12:23:28.690358  421523 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0805 12:23:28.690368  421523 command_runner.go:130] > # 	"operations_errors",
	I0805 12:23:28.690377  421523 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0805 12:23:28.690387  421523 command_runner.go:130] > # 	"image_pulls_by_name",
	I0805 12:23:28.690394  421523 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0805 12:23:28.690403  421523 command_runner.go:130] > # 	"image_pulls_failures",
	I0805 12:23:28.690410  421523 command_runner.go:130] > # 	"image_pulls_successes",
	I0805 12:23:28.690418  421523 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0805 12:23:28.690422  421523 command_runner.go:130] > # 	"image_layer_reuse",
	I0805 12:23:28.690432  421523 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0805 12:23:28.690442  421523 command_runner.go:130] > # 	"containers_oom_total",
	I0805 12:23:28.690449  421523 command_runner.go:130] > # 	"containers_oom",
	I0805 12:23:28.690460  421523 command_runner.go:130] > # 	"processes_defunct",
	I0805 12:23:28.690469  421523 command_runner.go:130] > # 	"operations_total",
	I0805 12:23:28.690479  421523 command_runner.go:130] > # 	"operations_latency_seconds",
	I0805 12:23:28.690490  421523 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0805 12:23:28.690500  421523 command_runner.go:130] > # 	"operations_errors_total",
	I0805 12:23:28.690508  421523 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0805 12:23:28.690515  421523 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0805 12:23:28.690520  421523 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0805 12:23:28.690530  421523 command_runner.go:130] > # 	"image_pulls_success_total",
	I0805 12:23:28.690540  421523 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0805 12:23:28.690546  421523 command_runner.go:130] > # 	"containers_oom_count_total",
	I0805 12:23:28.690557  421523 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0805 12:23:28.690567  421523 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0805 12:23:28.690580  421523 command_runner.go:130] > # ]
	I0805 12:23:28.690591  421523 command_runner.go:130] > # The port on which the metrics server will listen.
	I0805 12:23:28.690600  421523 command_runner.go:130] > # metrics_port = 9090
	I0805 12:23:28.690609  421523 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0805 12:23:28.690615  421523 command_runner.go:130] > # metrics_socket = ""
	I0805 12:23:28.690623  421523 command_runner.go:130] > # The certificate for the secure metrics server.
	I0805 12:23:28.690636  421523 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0805 12:23:28.690650  421523 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0805 12:23:28.690660  421523 command_runner.go:130] > # certificate on any modification event.
	I0805 12:23:28.690669  421523 command_runner.go:130] > # metrics_cert = ""
	I0805 12:23:28.690682  421523 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0805 12:23:28.690691  421523 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0805 12:23:28.690698  421523 command_runner.go:130] > # metrics_key = ""
	I0805 12:23:28.690704  421523 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0805 12:23:28.690713  421523 command_runner.go:130] > [crio.tracing]
	I0805 12:23:28.690727  421523 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0805 12:23:28.690737  421523 command_runner.go:130] > # enable_tracing = false
	I0805 12:23:28.690748  421523 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0805 12:23:28.690758  421523 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0805 12:23:28.690770  421523 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0805 12:23:28.690781  421523 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0805 12:23:28.690790  421523 command_runner.go:130] > # CRI-O NRI configuration.
	I0805 12:23:28.690796  421523 command_runner.go:130] > [crio.nri]
	I0805 12:23:28.690804  421523 command_runner.go:130] > # Globally enable or disable NRI.
	I0805 12:23:28.690813  421523 command_runner.go:130] > # enable_nri = false
	I0805 12:23:28.690823  421523 command_runner.go:130] > # NRI socket to listen on.
	I0805 12:23:28.690834  421523 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0805 12:23:28.690843  421523 command_runner.go:130] > # NRI plugin directory to use.
	I0805 12:23:28.690854  421523 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0805 12:23:28.690864  421523 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0805 12:23:28.690874  421523 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0805 12:23:28.690882  421523 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0805 12:23:28.690888  421523 command_runner.go:130] > # nri_disable_connections = false
	I0805 12:23:28.690899  421523 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0805 12:23:28.690910  421523 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0805 12:23:28.690919  421523 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0805 12:23:28.690929  421523 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0805 12:23:28.690941  421523 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0805 12:23:28.690949  421523 command_runner.go:130] > [crio.stats]
	I0805 12:23:28.690961  421523 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0805 12:23:28.690972  421523 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0805 12:23:28.690980  421523 command_runner.go:130] > # stats_collection_period = 0
	I0805 12:23:28.691008  421523 command_runner.go:130] ! time="2024-08-05 12:23:28.655914775Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0805 12:23:28.691035  421523 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
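	The dump above appears to be the node's effective CRI-O configuration as printed by "crio config": the explicitly set (uncommented) values are the conmon path and cgroup, cgroup_manager = "cgroupfs", pids_limit = 1024, the 16 MiB gRPC message limits, the net.ipv4.ip_unprivileged_port_start sysctl and the runc runtime entry, with everything else left at the commented defaults. As a rough sketch, assuming the multinode-841883 profile from this run is still up, the same dump should be reproducible by hand with:

		minikube -p multinode-841883 ssh "sudo crio config"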
	I0805 12:23:28.691180  421523 cni.go:84] Creating CNI manager for ""
	I0805 12:23:28.691191  421523 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 12:23:28.691201  421523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:23:28.691236  421523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-841883 NodeName:multinode-841883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:23:28.691423  421523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-841883"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:23:28.691502  421523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:23:28.701122  421523 command_runner.go:130] > kubeadm
	I0805 12:23:28.701142  421523 command_runner.go:130] > kubectl
	I0805 12:23:28.701148  421523 command_runner.go:130] > kubelet
	I0805 12:23:28.701178  421523 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:23:28.701239  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:23:28.710476  421523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 12:23:28.726603  421523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:23:28.742700  421523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
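Note: the kubeadm config rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node by the scp step just shown. As a minimal sketch, assuming the profile name and binary path from this log and that this kubeadm version provides the `config validate` subcommand, the rendered file can be inspected and sanity-checked by hand:

	# Sketch only: inspect/validate the rendered kubeadm config on the node.
	# Profile name and binary path are taken from this log; `kubeadm config validate`
	# is assumed to be available in this kubeadm version.
	minikube -p multinode-841883 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p multinode-841883 ssh -- sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new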
	I0805 12:23:28.759256  421523 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I0805 12:23:28.763112  421523 command_runner.go:130] > 192.168.39.86	control-plane.minikube.internal
	I0805 12:23:28.763279  421523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:23:28.902875  421523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:23:28.917549  421523 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883 for IP: 192.168.39.86
	I0805 12:23:28.917578  421523 certs.go:194] generating shared ca certs ...
	I0805 12:23:28.917609  421523 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:23:28.917815  421523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:23:28.917874  421523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:23:28.917888  421523 certs.go:256] generating profile certs ...
	I0805 12:23:28.917965  421523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/client.key
	I0805 12:23:28.918024  421523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.key.993fd26d
	I0805 12:23:28.918060  421523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.key
	I0805 12:23:28.918071  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 12:23:28.918083  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 12:23:28.918097  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 12:23:28.918109  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 12:23:28.918121  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 12:23:28.918136  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 12:23:28.918157  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 12:23:28.918169  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 12:23:28.918220  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:23:28.918248  421523 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:23:28.918255  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:23:28.918275  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:23:28.918316  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:23:28.918352  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:23:28.918389  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:23:28.918417  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 12:23:28.918432  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 12:23:28.918444  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:28.919126  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:23:28.943798  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:23:28.967412  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:23:28.990644  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:23:29.014148  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:23:29.037380  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:23:29.061282  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:23:29.084297  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:23:29.107203  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:23:29.131157  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:23:29.154664  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:23:29.178549  421523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:23:29.194632  421523 ssh_runner.go:195] Run: openssl version
	I0805 12:23:29.200242  421523 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 12:23:29.200462  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:23:29.210871  421523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.215112  421523 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.215270  421523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.215309  421523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.220667  421523 command_runner.go:130] > 51391683
	I0805 12:23:29.220924  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:23:29.229662  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:23:29.240468  421523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.244957  421523 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.245135  421523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.245194  421523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.250714  421523 command_runner.go:130] > 3ec20f2e
	I0805 12:23:29.250778  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:23:29.259658  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:23:29.270082  421523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.274403  421523 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.274544  421523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.274622  421523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.280223  421523 command_runner.go:130] > b5213941
	I0805 12:23:29.280295  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
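Note: the three blocks above install each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (51391683, 3ec20f2e, b5213941 in this run). A minimal sketch of the same hash-and-link step, using the minikubeCA path from this log:

	# Sketch: reproduce the hash-and-symlink step shown above for one certificate.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # OpenSSL resolves CAs via <hash>.0 links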
	I0805 12:23:29.289224  421523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:23:29.293855  421523 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:23:29.293880  421523 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0805 12:23:29.293889  421523 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0805 12:23:29.293901  421523 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 12:23:29.293920  421523 command_runner.go:130] > Access: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.293932  421523 command_runner.go:130] > Modify: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.293939  421523 command_runner.go:130] > Change: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.293947  421523 command_runner.go:130] >  Birth: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.294012  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:23:29.299445  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.299730  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:23:29.305138  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.305283  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:23:29.310504  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.310684  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:23:29.316024  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.316070  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:23:29.321282  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.321327  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:23:29.326794  421523 command_runner.go:130] > Certificate will not expire
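Note: each of the checks above runs `openssl x509 -checkend 86400`, which exits 0 and prints "Certificate will not expire" when the certificate is still valid 24 hours from now. A small sketch of the same check driven by its exit code (certificate path taken from this log):

	# Sketch: -checkend N succeeds only if the certificate is still valid N seconds from now.
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "certificate valid for at least 24h"
	else
	  echo "certificate expires within 24h (or is already expired)"
	fi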
	I0805 12:23:29.326887  421523 kubeadm.go:392] StartCluster: {Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:23:29.327045  421523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:23:29.327260  421523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:23:29.361122  421523 command_runner.go:130] > be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933
	I0805 12:23:29.361156  421523 command_runner.go:130] > bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f
	I0805 12:23:29.361166  421523 command_runner.go:130] > cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10
	I0805 12:23:29.361177  421523 command_runner.go:130] > e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1
	I0805 12:23:29.361186  421523 command_runner.go:130] > 9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a
	I0805 12:23:29.361238  421523 command_runner.go:130] > 4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7
	I0805 12:23:29.361262  421523 command_runner.go:130] > 38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe
	I0805 12:23:29.361336  421523 command_runner.go:130] > 7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1
	I0805 12:23:29.362757  421523 cri.go:89] found id: "be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933"
	I0805 12:23:29.362774  421523 cri.go:89] found id: "bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f"
	I0805 12:23:29.362780  421523 cri.go:89] found id: "cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10"
	I0805 12:23:29.362792  421523 cri.go:89] found id: "e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1"
	I0805 12:23:29.362797  421523 cri.go:89] found id: "9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a"
	I0805 12:23:29.362801  421523 cri.go:89] found id: "4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7"
	I0805 12:23:29.362809  421523 cri.go:89] found id: "38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe"
	I0805 12:23:29.362813  421523 cri.go:89] found id: "7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1"
	I0805 12:23:29.362818  421523 cri.go:89] found id: ""
	I0805 12:23:29.362874  421523 ssh_runner.go:195] Run: sudo runc list -f json
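Note: the filtered `crictl ps` above returns only container IDs; an ID can be mapped back to its pod and container name with another `crictl ps` call. A sketch using the first ID found in this run:

	# Sketch: resolve one of the IDs listed above back to its container/pod (ID from this log).
	sudo crictl ps -a --id be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933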
	
	
	==> CRI-O <==
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.440812901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722860714440785672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18892005-ef33-41da-ab16-3ae980fa0f99 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.441350763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=141f668e-41d7-47df-a3fa-132dc91d34ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.441400954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=141f668e-41d7-47df-a3fa-132dc91d34ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.441797646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4263e00071665fb62a27bd960d8a9141bf062de62f4e128f4f46034dfd628236,PodSandboxId:0680c63e48eecf32f4db50456d2cdbf763f72ef81b253e077df0622cc05d3e4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722860293050416784,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933,PodSandboxId:5415052d80d9bc352f5a9a1e80c1fdc4965d8f486e997c14d63784a90abd792c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722860237453831575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f,PodSandboxId:efbda2f5a062a7b3105c305106d35b07929873007d072f5afb089e7faa09219b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722860237396959152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.kubernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10,PodSandboxId:eb09f0acc4db3f91aef14462a298f0f24c2c63e7152d2c04625fffd9c0a5d319,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722860225375143490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1,PodSandboxId:ba809f3556888e01562cee1a8fd8a7d639f1406ab3c3bc9a89f1a95153c37fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722860221370951881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a,PodSandboxId:ec56389b09fd970d770bdcd650f65185042d1847f47c201302765071934665e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722860202228031552,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe,PodSandboxId:5658839e595f8ae657238db457616865e02d80ab7b8bf244c41874a829c054e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722860202152441421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7,PodSandboxId:aea5e35a8af16f80e782e0b0deb57cb886bf1ae41f9a252d1c212eb2f7e3fe22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722860202179572149,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,
},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1,PodSandboxId:4851d727499f1b8298a50dce48f87c8655f9fd8066eaf100567ccf06e7463a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722860202118566732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2e13678b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=141f668e-41d7-47df-a3fa-132dc91d34ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.483253252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e10cc607-4015-44ee-af5d-b46b0fd3010a name=/runtime.v1.RuntimeService/Version
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.483906712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e10cc607-4015-44ee-af5d-b46b0fd3010a name=/runtime.v1.RuntimeService/Version
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.487455037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d64ef7c-fc65-4bfd-9dd3-bfb63ee2778c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.487933714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722860714487907667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d64ef7c-fc65-4bfd-9dd3-bfb63ee2778c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.491930021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0854cf9-cf1d-4e16-997f-fda9e9c6cc21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.492039609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0854cf9-cf1d-4e16-997f-fda9e9c6cc21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.492588043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4263e00071665fb62a27bd960d8a9141bf062de62f4e128f4f46034dfd628236,PodSandboxId:0680c63e48eecf32f4db50456d2cdbf763f72ef81b253e077df0622cc05d3e4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722860293050416784,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933,PodSandboxId:5415052d80d9bc352f5a9a1e80c1fdc4965d8f486e997c14d63784a90abd792c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722860237453831575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f,PodSandboxId:efbda2f5a062a7b3105c305106d35b07929873007d072f5afb089e7faa09219b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722860237396959152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.kubernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10,PodSandboxId:eb09f0acc4db3f91aef14462a298f0f24c2c63e7152d2c04625fffd9c0a5d319,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722860225375143490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1,PodSandboxId:ba809f3556888e01562cee1a8fd8a7d639f1406ab3c3bc9a89f1a95153c37fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722860221370951881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a,PodSandboxId:ec56389b09fd970d770bdcd650f65185042d1847f47c201302765071934665e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722860202228031552,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe,PodSandboxId:5658839e595f8ae657238db457616865e02d80ab7b8bf244c41874a829c054e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722860202152441421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7,PodSandboxId:aea5e35a8af16f80e782e0b0deb57cb886bf1ae41f9a252d1c212eb2f7e3fe22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722860202179572149,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,
},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1,PodSandboxId:4851d727499f1b8298a50dce48f87c8655f9fd8066eaf100567ccf06e7463a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722860202118566732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2e13678b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0854cf9-cf1d-4e16-997f-fda9e9c6cc21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.540820648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c5d1ba7-c4cf-4b38-8b8f-96197e6060a3 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.540915960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c5d1ba7-c4cf-4b38-8b8f-96197e6060a3 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.542040579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3bd6abc-ac27-4e9d-a603-cd48cb522bfa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.542444132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722860714542422027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3bd6abc-ac27-4e9d-a603-cd48cb522bfa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.543175314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b7a570b-1a15-4f45-b89e-6acce4243cb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.543232975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b7a570b-1a15-4f45-b89e-6acce4243cb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.543855650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4263e00071665fb62a27bd960d8a9141bf062de62f4e128f4f46034dfd628236,PodSandboxId:0680c63e48eecf32f4db50456d2cdbf763f72ef81b253e077df0622cc05d3e4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722860293050416784,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933,PodSandboxId:5415052d80d9bc352f5a9a1e80c1fdc4965d8f486e997c14d63784a90abd792c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722860237453831575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f,PodSandboxId:efbda2f5a062a7b3105c305106d35b07929873007d072f5afb089e7faa09219b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722860237396959152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.kubernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10,PodSandboxId:eb09f0acc4db3f91aef14462a298f0f24c2c63e7152d2c04625fffd9c0a5d319,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722860225375143490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1,PodSandboxId:ba809f3556888e01562cee1a8fd8a7d639f1406ab3c3bc9a89f1a95153c37fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722860221370951881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a,PodSandboxId:ec56389b09fd970d770bdcd650f65185042d1847f47c201302765071934665e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722860202228031552,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe,PodSandboxId:5658839e595f8ae657238db457616865e02d80ab7b8bf244c41874a829c054e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722860202152441421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7,PodSandboxId:aea5e35a8af16f80e782e0b0deb57cb886bf1ae41f9a252d1c212eb2f7e3fe22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722860202179572149,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,
},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1,PodSandboxId:4851d727499f1b8298a50dce48f87c8655f9fd8066eaf100567ccf06e7463a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722860202118566732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2e13678b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b7a570b-1a15-4f45-b89e-6acce4243cb9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.586879974Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7739d69-8b81-468f-84e5-d18fffe2990f name=/runtime.v1.RuntimeService/Version
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.586955213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7739d69-8b81-468f-84e5-d18fffe2990f name=/runtime.v1.RuntimeService/Version
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.588071857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7dc46610-f0ce-44b0-9c4e-0f01700280cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.588482948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722860714588460828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dc46610-f0ce-44b0-9c4e-0f01700280cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.588901718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b017eab-03cf-40c2-ae76-234fe8911b29 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.588951490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b017eab-03cf-40c2-ae76-234fe8911b29 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:25:14 multinode-841883 crio[2878]: time="2024-08-05 12:25:14.589689906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4263e00071665fb62a27bd960d8a9141bf062de62f4e128f4f46034dfd628236,PodSandboxId:0680c63e48eecf32f4db50456d2cdbf763f72ef81b253e077df0622cc05d3e4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722860293050416784,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933,PodSandboxId:5415052d80d9bc352f5a9a1e80c1fdc4965d8f486e997c14d63784a90abd792c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722860237453831575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f,PodSandboxId:efbda2f5a062a7b3105c305106d35b07929873007d072f5afb089e7faa09219b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722860237396959152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.kubernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10,PodSandboxId:eb09f0acc4db3f91aef14462a298f0f24c2c63e7152d2c04625fffd9c0a5d319,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722860225375143490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1,PodSandboxId:ba809f3556888e01562cee1a8fd8a7d639f1406ab3c3bc9a89f1a95153c37fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722860221370951881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a,PodSandboxId:ec56389b09fd970d770bdcd650f65185042d1847f47c201302765071934665e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722860202228031552,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe,PodSandboxId:5658839e595f8ae657238db457616865e02d80ab7b8bf244c41874a829c054e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722860202152441421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7,PodSandboxId:aea5e35a8af16f80e782e0b0deb57cb886bf1ae41f9a252d1c212eb2f7e3fe22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722860202179572149,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,
},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1,PodSandboxId:4851d727499f1b8298a50dce48f87c8655f9fd8066eaf100567ccf06e7463a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722860202118566732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2e13678b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b017eab-03cf-40c2-ae76-234fe8911b29 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	98795ff3de72c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   752056d3ae54b       busybox-fc5497c4f-7lqm2
	7a98aae9aaee6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   336818d1a255e       kindnet-cwklz
	69b19862fff81       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   1cadc7450b91b       kube-proxy-h2bf5
	ce0d95b1a2638       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   72c1220da3ab0       coredns-7db6d8ff4d-zrs8r
	8317ecc61c341       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   e52865878a506       storage-provisioner
	e1f79275fe330       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   4d3916bf084ff       kube-controller-manager-multinode-841883
	695d028e1c305       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   72e039ff71c70       kube-apiserver-multinode-841883
	5117bb87b2f82       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   d0c548acf7266       kube-scheduler-multinode-841883
	2f3ade224e3cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   39dab2174e033       etcd-multinode-841883
	4263e00071665       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   0680c63e48eec       busybox-fc5497c4f-7lqm2
	be9aabcf660e7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   5415052d80d9b       coredns-7db6d8ff4d-zrs8r
	bbcbcd0cbb8aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   efbda2f5a062a       storage-provisioner
	cc02fb96e19d7       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   eb09f0acc4db3       kindnet-cwklz
	e6441cea8b8c7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   ba809f3556888       kube-proxy-h2bf5
	9f5bff1b0b670       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   ec56389b09fd9       kube-controller-manager-multinode-841883
	4c65eb324f492       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   aea5e35a8af16       etcd-multinode-841883
	38b97b9b3cf57       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   5658839e595f8       kube-scheduler-multinode-841883
	7ad7f7b96f849       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   4851d727499f1       kube-apiserver-multinode-841883
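	
	The container status table above is minikube's human-readable rendering of the same data returned by the CRI ListContainers calls in the crio debug log. As a rough sketch of how to query those endpoints directly on the node (an assumption for illustration: crictl is available on the node and pointed at the crio socket shown in the node annotations, unix:///var/run/crio/crio.sock), the following commands exercise the Version, ImageFsInfo and ListContainers RPCs seen above:
	
	    # Assumption: run on the minikube node (e.g. via "minikube ssh -p multinode-841883") with crictl installed
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version        # RuntimeService/Version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo    # ImageService/ImageFsInfo
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a          # RuntimeService/ListContainers, running and exited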
	
	
	==> coredns [be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933] <==
	[INFO] 10.244.1.2:39925 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001821251s
	[INFO] 10.244.1.2:46445 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127982s
	[INFO] 10.244.1.2:56510 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010266s
	[INFO] 10.244.1.2:37286 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001245769s
	[INFO] 10.244.1.2:57588 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099668s
	[INFO] 10.244.1.2:45841 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106901s
	[INFO] 10.244.1.2:34459 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110108s
	[INFO] 10.244.0.3:34006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013481s
	[INFO] 10.244.0.3:41161 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067302s
	[INFO] 10.244.0.3:37785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056333s
	[INFO] 10.244.0.3:34587 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048812s
	[INFO] 10.244.1.2:51284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137526s
	[INFO] 10.244.1.2:43623 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096345s
	[INFO] 10.244.1.2:53591 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092229s
	[INFO] 10.244.1.2:43422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076128s
	[INFO] 10.244.0.3:57865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113278s
	[INFO] 10.244.0.3:48031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000343192s
	[INFO] 10.244.0.3:58137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104051s
	[INFO] 10.244.0.3:38594 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091942s
	[INFO] 10.244.1.2:38327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010431s
	[INFO] 10.244.1.2:40574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068802s
	[INFO] 10.244.1.2:48699 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059382s
	[INFO] 10.244.1.2:49670 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094872s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41583 - 53781 "HINFO IN 37412991472444561.7596293966948227027. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.015584976s
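	
	Each [INFO] query line emitted by coredns above follows its log plugin's common log format: client address and port, query ID, the quoted query (type, class, name, protocol, request size, DO bit, UDP buffer size), then the response code, response flags, response size and duration. A minimal Corefile sketch that would produce this style of query logging (illustrative only; not necessarily the Corefile deployed in this cluster) looks like:
	
	    .:53 {
	        errors
	        log                                            # per-query [INFO] lines as shown above
	        health
	        kubernetes cluster.local in-addr.arpa ip6.arpa
	        forward . /etc/resolv.conf
	        cache 30
	    }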
	
	
	==> describe nodes <==
	Name:               multinode-841883
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841883
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=multinode-841883
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T12_16_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:16:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841883
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:25:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:16:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:16:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:16:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:17:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    multinode-841883
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4adf07e287e34edea1723ba3f4587bda
	  System UUID:                4adf07e2-87e3-4ede-a172-3ba3f4587bda
	  Boot ID:                    0d30dd89-98f9-436f-8b69-49a330751387
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7lqm2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 coredns-7db6d8ff4d-zrs8r                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m14s
	  kube-system                 etcd-multinode-841883                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-cwklz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m14s
	  kube-system                 kube-apiserver-multinode-841883             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-841883    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-h2bf5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-scheduler-multinode-841883             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m13s                kube-proxy       
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m27s                kubelet          Node multinode-841883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m27s                kubelet          Node multinode-841883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s                kubelet          Node multinode-841883 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m27s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m15s                node-controller  Node multinode-841883 event: Registered Node multinode-841883 in Controller
	  Normal  NodeReady                7m58s                kubelet          Node multinode-841883 status is now: NodeReady
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node multinode-841883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node multinode-841883 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node multinode-841883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                  node-controller  Node multinode-841883 event: Registered Node multinode-841883 in Controller
	
	
	Name:               multinode-841883-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841883-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=multinode-841883
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T12_24_13_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:24:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841883-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:25:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:24:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:24:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:24:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:24:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    multinode-841883-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b7d544f73564d0ebdb16ab281242cf5
	  System UUID:                6b7d544f-7356-4d0e-bdb1-6ab281242cf5
	  Boot ID:                    d2634bd5-8967-4bb9-83cf-280ed6dcee00
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jtgc2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-w4fdf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m26s
	  kube-system                 kube-proxy-6q2pz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m22s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m27s (x2 over 7m27s)  kubelet     Node multinode-841883-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x2 over 7m27s)  kubelet     Node multinode-841883-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s (x2 over 7m27s)  kubelet     Node multinode-841883-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m7s                   kubelet     Node multinode-841883-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-841883-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-841883-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-841883-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-841883-m02 status is now: NodeReady
	
	
	Name:               multinode-841883-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841883-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=multinode-841883
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T12_24_52_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:24:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841883-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:25:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:25:11 +0000   Mon, 05 Aug 2024 12:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:25:11 +0000   Mon, 05 Aug 2024 12:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:25:11 +0000   Mon, 05 Aug 2024 12:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:25:11 +0000   Mon, 05 Aug 2024 12:25:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    multinode-841883-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a65e650014b343aebfa866027bd2e3ed
	  System UUID:                a65e6500-14b3-43ae-bfa8-66027bd2e3ed
	  Boot ID:                    c5c1a832-52f1-4985-a8e0-44eddc29c990
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-572vf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-proxy-fjx7z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m29s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m40s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m33s (x2 over 6m33s)  kubelet     Node multinode-841883-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x2 over 6m33s)  kubelet     Node multinode-841883-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x2 over 6m33s)  kubelet     Node multinode-841883-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet     Node multinode-841883-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet     Node multinode-841883-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet     Node multinode-841883-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet     Node multinode-841883-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m24s                  kubelet     Node multinode-841883-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-841883-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-841883-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-841883-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-841883-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062248] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064651] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.193215] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.132128] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.259895] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.142872] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +4.084513] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +0.060374] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.489444] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.084526] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.067464] kauditd_printk_skb: 18 callbacks suppressed
	[Aug 5 12:17] systemd-fstab-generator[1466]: Ignoring "noauto" option for root device
	[  +5.722932] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 5 12:18] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 5 12:23] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.148580] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.169768] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.142903] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.268851] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +0.693965] systemd-fstab-generator[2961]: Ignoring "noauto" option for root device
	[  +3.055018] systemd-fstab-generator[3366]: Ignoring "noauto" option for root device
	[  +0.799838] kauditd_printk_skb: 184 callbacks suppressed
	[ +15.846388] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[  +0.102279] kauditd_printk_skb: 32 callbacks suppressed
	[Aug 5 12:24] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3] <==
	{"level":"info","ts":"2024-08-05T12:23:32.995592Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:23:32.997667Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:23:32.997969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae switched to configuration voters=(6802115243719069102)"}
	{"level":"info","ts":"2024-08-05T12:23:32.998035Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e2108b476944475","local-member-id":"5e65f7c667250dae","added-peer-id":"5e65f7c667250dae","added-peer-peer-urls":["https://192.168.39.86:2380"]}
	{"level":"info","ts":"2024-08-05T12:23:32.998163Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e2108b476944475","local-member-id":"5e65f7c667250dae","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:23:32.998202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:23:33.013799Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T12:23:33.014011Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5e65f7c667250dae","initial-advertise-peer-urls":["https://192.168.39.86:2380"],"listen-peer-urls":["https://192.168.39.86:2380"],"advertise-client-urls":["https://192.168.39.86:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.86:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T12:23:33.014053Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T12:23:33.014169Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:23:33.014192Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:23:34.4597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T12:23:34.459757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T12:23:34.459854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae received MsgPreVoteResp from 5e65f7c667250dae at term 2"}
	{"level":"info","ts":"2024-08-05T12:23:34.459885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.459893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae received MsgVoteResp from 5e65f7c667250dae at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.459913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became leader at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.45994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5e65f7c667250dae elected leader 5e65f7c667250dae at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.466438Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5e65f7c667250dae","local-member-attributes":"{Name:multinode-841883 ClientURLs:[https://192.168.39.86:2379]}","request-path":"/0/members/5e65f7c667250dae/attributes","cluster-id":"1e2108b476944475","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:23:34.466534Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:23:34.466725Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:23:34.46706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:23:34.46711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:23:34.468872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:23:34.468877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.86:2379"}
	
	
	==> etcd [4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7] <==
	{"level":"info","ts":"2024-08-05T12:16:42.576748Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:16:42.57685Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:16:42.596161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.86:2379"}
	{"level":"info","ts":"2024-08-05T12:17:47.987047Z","caller":"traceutil/trace.go:171","msg":"trace[1517432004] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:461; }","duration":"109.233877ms","start":"2024-08-05T12:17:47.877793Z","end":"2024-08-05T12:17:47.987027Z","steps":["trace[1517432004] 'read index received'  (duration: 31.410545ms)","trace[1517432004] 'applied index is now lower than readState.Index'  (duration: 77.822851ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T12:17:47.987591Z","caller":"traceutil/trace.go:171","msg":"trace[386703741] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"152.518744ms","start":"2024-08-05T12:17:47.835062Z","end":"2024-08-05T12:17:47.987581Z","steps":["trace[386703741] 'process raft request'  (duration: 151.807825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T12:17:47.987983Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.110614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841883-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-05T12:17:47.988565Z","caller":"traceutil/trace.go:171","msg":"trace[370504870] range","detail":"{range_begin:/registry/minions/multinode-841883-m02; range_end:; response_count:1; response_revision:443; }","duration":"110.762702ms","start":"2024-08-05T12:17:47.877788Z","end":"2024-08-05T12:17:47.988551Z","steps":["trace[370504870] 'agreement among raft nodes before linearized reading'  (duration: 109.973069ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:18:41.748142Z","caller":"traceutil/trace.go:171","msg":"trace[1817276155] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"187.275505ms","start":"2024-08-05T12:18:41.56083Z","end":"2024-08-05T12:18:41.748106Z","steps":["trace[1817276155] 'process raft request'  (duration: 182.299104ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:18:41.748395Z","caller":"traceutil/trace.go:171","msg":"trace[1505216082] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"151.245753ms","start":"2024-08-05T12:18:41.597131Z","end":"2024-08-05T12:18:41.748377Z","steps":["trace[1505216082] 'process raft request'  (duration: 150.803801ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:18:41.755811Z","caller":"traceutil/trace.go:171","msg":"trace[1386503716] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:608; }","duration":"124.756732ms","start":"2024-08-05T12:18:41.631041Z","end":"2024-08-05T12:18:41.755798Z","steps":["trace[1386503716] 'read index received'  (duration: 111.997993ms)","trace[1386503716] 'applied index is now lower than readState.Index'  (duration: 12.75796ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T12:18:41.755963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.892543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T12:18:41.756Z","caller":"traceutil/trace.go:171","msg":"trace[754973419] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:577; }","duration":"124.979561ms","start":"2024-08-05T12:18:41.631015Z","end":"2024-08-05T12:18:41.755995Z","steps":["trace[754973419] 'agreement among raft nodes before linearized reading'  (duration: 124.871517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T12:19:38.550459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.491356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841883-m03\" ","response":"range_response_count:1 size:3116"}
	{"level":"info","ts":"2024-08-05T12:19:38.55081Z","caller":"traceutil/trace.go:171","msg":"trace[2146832427] range","detail":"{range_begin:/registry/minions/multinode-841883-m03; range_end:; response_count:1; response_revision:710; }","duration":"144.882733ms","start":"2024-08-05T12:19:38.405905Z","end":"2024-08-05T12:19:38.550787Z","steps":["trace[2146832427] 'range keys from in-memory index tree'  (duration: 144.218068ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:19:38.552525Z","caller":"traceutil/trace.go:171","msg":"trace[803482962] transaction","detail":"{read_only:false; response_revision:711; number_of_response:1; }","duration":"112.809359ms","start":"2024-08-05T12:19:38.439706Z","end":"2024-08-05T12:19:38.552515Z","steps":["trace[803482962] 'process raft request'  (duration: 112.673289ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:21:55.944235Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T12:21:55.94436Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-841883","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.86:2380"],"advertise-client-urls":["https://192.168.39.86:2379"]}
	{"level":"warn","ts":"2024-08-05T12:21:55.944507Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.86:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:21:55.944543Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.86:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:21:55.946475Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:21:55.946548Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T12:21:55.991981Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5e65f7c667250dae","current-leader-member-id":"5e65f7c667250dae"}
	{"level":"info","ts":"2024-08-05T12:21:55.994811Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:21:55.994959Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:21:55.994992Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-841883","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.86:2380"],"advertise-client-urls":["https://192.168.39.86:2379"]}
	
	
	==> kernel <==
	 12:25:15 up 9 min,  0 users,  load average: 0.14, 0.21, 0.12
	Linux multinode-841883 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868] <==
	I0805 12:24:27.485251       1 main.go:299] handling current node
	I0805 12:24:37.481787       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:24:37.481889       1 main.go:299] handling current node
	I0805 12:24:37.481926       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:24:37.481945       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:24:37.482101       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:24:37.482122       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:24:47.481358       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:24:47.481387       1 main.go:299] handling current node
	I0805 12:24:47.481415       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:24:47.481420       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:24:47.481712       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:24:47.481788       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:24:57.483560       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:24:57.483773       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:24:57.483942       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:24:57.483970       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.2.0/24] 
	I0805 12:24:57.484037       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:24:57.484058       1 main.go:299] handling current node
	I0805 12:25:07.481450       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:25:07.481683       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.2.0/24] 
	I0805 12:25:07.481880       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:25:07.481932       1 main.go:299] handling current node
	I0805 12:25:07.481960       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:25:07.481978       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10] <==
	I0805 12:21:06.464363       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:16.463987       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:16.464122       1 main.go:299] handling current node
	I0805 12:21:16.464155       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:16.464174       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:16.464312       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:16.464334       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:26.472845       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:26.472957       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:26.473187       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:26.473218       1 main.go:299] handling current node
	I0805 12:21:26.473252       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:26.473285       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:36.466152       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:36.466320       1 main.go:299] handling current node
	I0805 12:21:36.466359       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:36.466377       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:36.466567       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:36.466589       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:46.468155       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:46.468198       1 main.go:299] handling current node
	I0805 12:21:46.468215       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:46.468221       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:46.468400       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:46.468423       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c] <==
	I0805 12:23:35.761657       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 12:23:35.763452       1 aggregator.go:165] initial CRD sync complete...
	I0805 12:23:35.763597       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 12:23:35.763690       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 12:23:35.805474       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 12:23:35.805969       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 12:23:35.811531       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 12:23:35.811729       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 12:23:35.811755       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 12:23:35.819425       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 12:23:35.820174       1 shared_informer.go:320] Caches are synced for configmaps
	E0805 12:23:35.825927       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 12:23:35.835010       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 12:23:35.864950       1 cache.go:39] Caches are synced for autoregister controller
	I0805 12:23:35.865134       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 12:23:35.865174       1 policy_source.go:224] refreshing policies
	I0805 12:23:35.870011       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 12:23:36.737861       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 12:23:37.612030       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 12:23:37.729437       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 12:23:37.746440       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 12:23:37.830920       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 12:23:37.840923       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 12:23:48.916067       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 12:23:49.017496       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1] <==
	I0805 12:21:55.970932       1 controller.go:157] Shutting down quota evaluator
	I0805 12:21:55.970998       1 controller.go:176] quota evaluator worker shutdown
	W0805 12:21:55.971245       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.971586       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972015       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972092       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972665       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972801       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0805 12:21:55.974131       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:21:55.974186       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:21:55.974211       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:21:55.974233       1 controller.go:176] quota evaluator worker shutdown
	W0805 12:21:55.974298       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.976907       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977110       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977180       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977242       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977303       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977369       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977428       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977496       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977556       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977726       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.978469       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.980204       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a] <==
	I0805 12:17:19.787254       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0805 12:17:47.991235       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m02\" does not exist"
	I0805 12:17:48.002561       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m02" podCIDRs=["10.244.1.0/24"]
	I0805 12:17:49.791209       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-841883-m02"
	I0805 12:18:07.823311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:18:10.093339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.228981ms"
	I0805 12:18:10.108340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.658887ms"
	I0805 12:18:10.108429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.947µs"
	I0805 12:18:13.537868       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.564433ms"
	I0805 12:18:13.538035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.096µs"
	I0805 12:18:13.863829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.454982ms"
	I0805 12:18:13.864125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.326µs"
	I0805 12:18:41.751566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:18:41.753696       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m03\" does not exist"
	I0805 12:18:41.797709       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m03" podCIDRs=["10.244.2.0/24"]
	I0805 12:18:44.810671       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-841883-m03"
	I0805 12:19:00.998936       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:19:29.398495       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:19:30.369500       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:19:30.370916       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m03\" does not exist"
	I0805 12:19:30.384126       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m03" podCIDRs=["10.244.3.0/24"]
	I0805 12:19:50.231580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:20:34.868908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:20:34.934799       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.872132ms"
	I0805 12:20:34.934933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.575µs"
	
	
	==> kube-controller-manager [e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc] <==
	I0805 12:23:49.357331       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:23:49.404920       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:23:49.405005       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 12:24:08.657242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.090407ms"
	I0805 12:24:08.657377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.827µs"
	I0805 12:24:08.663398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.602949ms"
	I0805 12:24:08.663524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.042µs"
	I0805 12:24:11.912884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.025µs"
	I0805 12:24:13.178708       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m02\" does not exist"
	I0805 12:24:13.193833       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m02" podCIDRs=["10.244.1.0/24"]
	I0805 12:24:15.099960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.521µs"
	I0805 12:24:15.108195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.456µs"
	I0805 12:24:15.117427       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.663µs"
	I0805 12:24:15.125520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.217µs"
	I0805 12:24:15.129824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.572µs"
	I0805 12:24:32.677209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:24:32.706257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.87µs"
	I0805 12:24:32.730018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.336µs"
	I0805 12:24:36.391198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.727843ms"
	I0805 12:24:36.391398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.011µs"
	I0805 12:24:50.887271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:24:52.179046       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m03\" does not exist"
	I0805 12:24:52.179545       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:24:52.188497       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m03" podCIDRs=["10.244.2.0/24"]
	I0805 12:25:11.751900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	
	
	==> kube-proxy [69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015] <==
	I0805 12:23:36.668510       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:23:36.690534       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.86"]
	I0805 12:23:36.765501       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:23:36.765562       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:23:36.765580       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:23:36.773929       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:23:36.774145       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:23:36.774174       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:23:36.779064       1 config.go:192] "Starting service config controller"
	I0805 12:23:36.779099       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:23:36.779214       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:23:36.779234       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:23:36.791177       1 config.go:319] "Starting node config controller"
	I0805 12:23:36.791208       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:23:36.880188       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:23:36.880257       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:23:36.891577       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1] <==
	I0805 12:17:01.751827       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:17:01.765482       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.86"]
	I0805 12:17:01.811177       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:17:01.811231       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:17:01.811287       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:17:01.814313       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:17:01.814694       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:17:01.814724       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:17:01.816046       1 config.go:192] "Starting service config controller"
	I0805 12:17:01.816304       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:17:01.816360       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:17:01.816366       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:17:01.817272       1 config.go:319] "Starting node config controller"
	I0805 12:17:01.817392       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:17:01.916967       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:17:01.917026       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:17:01.919262       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe] <==
	E0805 12:16:45.517953       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 12:16:45.601382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 12:16:45.601829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 12:16:45.625794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.626303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.629890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 12:16:45.629948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 12:16:45.657222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 12:16:45.657266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 12:16:45.724725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 12:16:45.724834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 12:16:45.761907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.762005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.769549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.769672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.776575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.776668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.838418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 12:16:45.838472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 12:16:45.979469       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 12:16:45.979518       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0805 12:16:48.604422       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:21:55.957744       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0805 12:21:55.957872       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0805 12:21:55.958027       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650] <==
	I0805 12:23:33.432210       1 serving.go:380] Generated self-signed cert in-memory
	W0805 12:23:35.770938       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:23:35.771016       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:23:35.771660       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:23:35.771714       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:23:35.790725       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 12:23:35.790868       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:23:35.793195       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:23:35.794151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:23:35.794235       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:23:35.794276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 12:23:35.895291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 12:23:32 multinode-841883 kubelet[3373]: E0805 12:23:32.922979    3373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.86:8443: connect: connection refused
	Aug 05 12:23:33 multinode-841883 kubelet[3373]: I0805 12:23:33.570414    3373 kubelet_node_status.go:73] "Attempting to register node" node="multinode-841883"
	Aug 05 12:23:35 multinode-841883 kubelet[3373]: I0805 12:23:35.964938    3373 kubelet_node_status.go:112] "Node was previously registered" node="multinode-841883"
	Aug 05 12:23:35 multinode-841883 kubelet[3373]: I0805 12:23:35.965039    3373 kubelet_node_status.go:76] "Successfully registered node" node="multinode-841883"
	Aug 05 12:23:35 multinode-841883 kubelet[3373]: I0805 12:23:35.966421    3373 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 12:23:35 multinode-841883 kubelet[3373]: I0805 12:23:35.967811    3373 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.054928    3373 apiserver.go:52] "Watching apiserver"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.058476    3373 topology_manager.go:215] "Topology Admit Handler" podUID="cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49" podNamespace="kube-system" podName="kube-proxy-h2bf5"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.059825    3373 topology_manager.go:215] "Topology Admit Handler" podUID="41dfbd90-fc76-49cd-8127-41d05f965cee" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zrs8r"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.059959    3373 topology_manager.go:215] "Topology Admit Handler" podUID="3de46bbd-b3ee-4132-927a-2abded24a986" podNamespace="kube-system" podName="kindnet-cwklz"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.060060    3373 topology_manager.go:215] "Topology Admit Handler" podUID="d4d95110-27dc-4a02-810d-c60f43201bde" podNamespace="kube-system" podName="storage-provisioner"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.060135    3373 topology_manager.go:215] "Topology Admit Handler" podUID="f10ce9f2-7971-4942-836f-143b674e5cb4" podNamespace="default" podName="busybox-fc5497c4f-7lqm2"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.061334    3373 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.067962    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3de46bbd-b3ee-4132-927a-2abded24a986-cni-cfg\") pod \"kindnet-cwklz\" (UID: \"3de46bbd-b3ee-4132-927a-2abded24a986\") " pod="kube-system/kindnet-cwklz"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068049    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3de46bbd-b3ee-4132-927a-2abded24a986-lib-modules\") pod \"kindnet-cwklz\" (UID: \"3de46bbd-b3ee-4132-927a-2abded24a986\") " pod="kube-system/kindnet-cwklz"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068114    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49-xtables-lock\") pod \"kube-proxy-h2bf5\" (UID: \"cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49\") " pod="kube-system/kube-proxy-h2bf5"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068189    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3de46bbd-b3ee-4132-927a-2abded24a986-xtables-lock\") pod \"kindnet-cwklz\" (UID: \"3de46bbd-b3ee-4132-927a-2abded24a986\") " pod="kube-system/kindnet-cwklz"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068252    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d4d95110-27dc-4a02-810d-c60f43201bde-tmp\") pod \"storage-provisioner\" (UID: \"d4d95110-27dc-4a02-810d-c60f43201bde\") " pod="kube-system/storage-provisioner"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068314    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49-lib-modules\") pod \"kube-proxy-h2bf5\" (UID: \"cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49\") " pod="kube-system/kube-proxy-h2bf5"
	Aug 05 12:23:38 multinode-841883 kubelet[3373]: I0805 12:23:38.849726    3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 05 12:24:32 multinode-841883 kubelet[3373]: E0805 12:24:32.142413    3373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:25:14.166323  422623 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
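Note: the stderr above shows the post-mortem helper failing to re-read /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt because a single line in that file exceeds the 64 KiB default token limit of Go's bufio.Scanner ("token too long"). A minimal sketch for inspecting the offending file by hand on the CI host; the tools used are assumptions about what is available there, not something exercised by this run:

	# length of the longest line, in bytes; anything above 65536 trips the default scanner limit
	wc -L /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt
	# wrap the over-long lines so the file can still be paged through
	fold -w 4000 /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt | less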
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-841883 -n multinode-841883
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-841883 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.10s)
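Note: the kubelet log above also repeatedly fails to create the KUBE-KUBELET-CANARY chain because the guest's ip6tables `nat' table is unavailable ("do you need to insmod?"). A minimal sketch for checking that inside the VM, assuming the profile is still running; the module name and the ssh entry point are assumptions for illustration, not part of the test:

	# run the checks through the control-plane VM of the profile
	out/minikube-linux-amd64 -p multinode-841883 ssh "lsmod | grep ip6table_nat"
	# try loading the IPv6 NAT module and listing the nat table it backs
	out/minikube-linux-amd64 -p multinode-841883 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"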

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 stop
E0805 12:25:27.757322  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841883 stop: exit status 82 (2m0.468316469s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-841883-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-841883 stop": exit status 82
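Note: the stop timed out with GUEST_STOP_TIMEOUT while the m02 VM stayed in state "Running". A minimal sketch for looking at the same VM from the libvirt side on the CI host, assuming virsh access and the kvm2 driver's usual domain naming (both are assumptions, not verified by this run):

	sudo virsh list --all                          # expect domains named multinode-841883 and multinode-841883-m02
	sudo virsh domstate multinode-841883-m02       # what libvirt reports for the node that would not stop
	out/minikube-linux-amd64 -p multinode-841883 stop --alsologtostderr -v=5   # retry the stop with verbose driver logs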
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841883 status: exit status 3 (18.874926405s)

                                                
                                                
-- stdout --
	multinode-841883
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841883-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:27:37.444128  423299 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	E0805 12:27:37.444170  423299 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-841883 status" : exit status 3
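Note: both status errors above come from SSH to 192.168.39.205:22 returning "no route to host", so the m02 guest is unreachable over the network even though the driver still reports the VM as Running. A minimal reachability check from the CI host; plain networking tools, nothing minikube-specific is assumed:

	ping -c 3 192.168.39.205          # is the guest answering on its NAT network at all?
	nc -vz -w 5 192.168.39.205 22     # can the SSH port used by the status probe be reached?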
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-841883 -n multinode-841883
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-841883 logs -n 25: (1.421313474s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883:/home/docker/cp-test_multinode-841883-m02_multinode-841883.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883 sudo cat                                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m02_multinode-841883.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03:/home/docker/cp-test_multinode-841883-m02_multinode-841883-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883-m03 sudo cat                                   | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m02_multinode-841883-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp testdata/cp-test.txt                                                | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2344340306/001/cp-test_multinode-841883-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883:/home/docker/cp-test_multinode-841883-m03_multinode-841883.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883 sudo cat                                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m03_multinode-841883.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt                       | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m02:/home/docker/cp-test_multinode-841883-m03_multinode-841883-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n                                                                 | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | multinode-841883-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-841883 ssh -n multinode-841883-m02 sudo cat                                   | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | /home/docker/cp-test_multinode-841883-m03_multinode-841883-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-841883 node stop m03                                                          | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	| node    | multinode-841883 node start                                                             | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC | 05 Aug 24 12:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-841883                                                                | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC |                     |
	| stop    | -p multinode-841883                                                                     | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:19 UTC |                     |
	| start   | -p multinode-841883                                                                     | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:21 UTC | 05 Aug 24 12:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-841883                                                                | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:25 UTC |                     |
	| node    | multinode-841883 node delete                                                            | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:25 UTC | 05 Aug 24 12:25 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-841883 stop                                                                   | multinode-841883 | jenkins | v1.33.1 | 05 Aug 24 12:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:21:54
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:21:54.825660  421523 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:21:54.825898  421523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:21:54.825927  421523 out.go:304] Setting ErrFile to fd 2...
	I0805 12:21:54.825944  421523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:21:54.826537  421523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:21:54.827270  421523 out.go:298] Setting JSON to false
	I0805 12:21:54.828356  421523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7462,"bootTime":1722853053,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:21:54.828422  421523 start.go:139] virtualization: kvm guest
	I0805 12:21:54.830928  421523 out.go:177] * [multinode-841883] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:21:54.832466  421523 notify.go:220] Checking for updates...
	I0805 12:21:54.832493  421523 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:21:54.834048  421523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:21:54.835661  421523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:21:54.837247  421523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:21:54.838901  421523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:21:54.840066  421523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:21:54.841693  421523 config.go:182] Loaded profile config "multinode-841883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:21:54.841782  421523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:21:54.842202  421523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:21:54.842265  421523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:21:54.858025  421523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0805 12:21:54.858470  421523 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:21:54.858965  421523 main.go:141] libmachine: Using API Version  1
	I0805 12:21:54.858985  421523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:21:54.859457  421523 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:21:54.859679  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:21:54.894646  421523 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:21:54.895874  421523 start.go:297] selected driver: kvm2
	I0805 12:21:54.895892  421523 start.go:901] validating driver "kvm2" against &{Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:21:54.896034  421523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:21:54.896365  421523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:21:54.896441  421523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:21:54.911831  421523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:21:54.912595  421523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:21:54.912680  421523 cni.go:84] Creating CNI manager for ""
	I0805 12:21:54.912697  421523 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 12:21:54.912779  421523 start.go:340] cluster config:
	{Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:21:54.912949  421523 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:21:54.914924  421523 out.go:177] * Starting "multinode-841883" primary control-plane node in "multinode-841883" cluster
	I0805 12:21:54.916494  421523 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:21:54.916542  421523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 12:21:54.916553  421523 cache.go:56] Caching tarball of preloaded images
	I0805 12:21:54.916651  421523 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:21:54.916669  421523 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 12:21:54.916793  421523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/config.json ...
	I0805 12:21:54.917032  421523 start.go:360] acquireMachinesLock for multinode-841883: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:21:54.917105  421523 start.go:364] duration metric: took 42.097µs to acquireMachinesLock for "multinode-841883"
	I0805 12:21:54.917122  421523 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:21:54.917128  421523 fix.go:54] fixHost starting: 
	I0805 12:21:54.917394  421523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:21:54.917429  421523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:21:54.931256  421523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
	I0805 12:21:54.931699  421523 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:21:54.932193  421523 main.go:141] libmachine: Using API Version  1
	I0805 12:21:54.932215  421523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:21:54.932499  421523 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:21:54.932663  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:21:54.932924  421523 main.go:141] libmachine: (multinode-841883) Calling .GetState
	I0805 12:21:54.934746  421523 fix.go:112] recreateIfNeeded on multinode-841883: state=Running err=<nil>
	W0805 12:21:54.934765  421523 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:21:54.936728  421523 out.go:177] * Updating the running kvm2 "multinode-841883" VM ...
	I0805 12:21:54.937971  421523 machine.go:94] provisionDockerMachine start ...
	I0805 12:21:54.938011  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:21:54.938256  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:54.941112  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:54.941603  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:54.941625  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:54.941807  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:54.942000  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:54.942171  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:54.942285  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:54.942436  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:54.942645  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:54.942664  421523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:21:55.056589  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841883
	
	I0805 12:21:55.056615  421523 main.go:141] libmachine: (multinode-841883) Calling .GetMachineName
	I0805 12:21:55.056906  421523 buildroot.go:166] provisioning hostname "multinode-841883"
	I0805 12:21:55.056938  421523 main.go:141] libmachine: (multinode-841883) Calling .GetMachineName
	I0805 12:21:55.057193  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.059679  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.060027  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.060054  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.060159  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.060359  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.060634  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.060792  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.060957  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:55.061294  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:55.061327  421523 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-841883 && echo "multinode-841883" | sudo tee /etc/hostname
	I0805 12:21:55.182960  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-841883
	
	I0805 12:21:55.182988  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.186110  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.186502  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.186561  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.186681  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.186874  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.187069  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.187184  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.187381  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:55.187597  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:55.187620  421523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-841883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-841883/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-841883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:21:55.296831  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:21:55.296862  421523 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:21:55.296901  421523 buildroot.go:174] setting up certificates
	I0805 12:21:55.296910  421523 provision.go:84] configureAuth start
	I0805 12:21:55.296920  421523 main.go:141] libmachine: (multinode-841883) Calling .GetMachineName
	I0805 12:21:55.297203  421523 main.go:141] libmachine: (multinode-841883) Calling .GetIP
	I0805 12:21:55.300056  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.300460  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.300485  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.300805  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.303190  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.303554  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.303592  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.303777  421523 provision.go:143] copyHostCerts
	I0805 12:21:55.303808  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:21:55.303854  421523 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:21:55.303875  421523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:21:55.303956  421523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:21:55.304061  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:21:55.304088  421523 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:21:55.304098  421523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:21:55.304136  421523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:21:55.304237  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:21:55.304334  421523 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:21:55.304362  421523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:21:55.304423  421523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:21:55.304532  421523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.multinode-841883 san=[127.0.0.1 192.168.39.86 localhost minikube multinode-841883]
	I0805 12:21:55.647089  421523 provision.go:177] copyRemoteCerts
	I0805 12:21:55.647168  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:21:55.647200  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.650056  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.650501  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.650563  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.650694  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.650894  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.651118  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.651319  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:21:55.738988  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 12:21:55.739071  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:21:55.765888  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 12:21:55.765963  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 12:21:55.793111  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 12:21:55.793206  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:21:55.818208  421523 provision.go:87] duration metric: took 521.279177ms to configureAuth
	I0805 12:21:55.818244  421523 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:21:55.818480  421523 config.go:182] Loaded profile config "multinode-841883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:21:55.818568  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:21:55.821361  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.821753  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:21:55.821788  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:21:55.821910  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:21:55.822134  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.822304  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:21:55.822473  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:21:55.822691  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:21:55.822880  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:21:55.822896  421523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:23:26.705502  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:23:26.705543  421523 machine.go:97] duration metric: took 1m31.767551252s to provisionDockerMachine
	I0805 12:23:26.705560  421523 start.go:293] postStartSetup for "multinode-841883" (driver="kvm2")
	I0805 12:23:26.705577  421523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:23:26.705601  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.705984  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:23:26.706016  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.709515  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.709983  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.710017  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.710152  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.710349  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.710538  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.710715  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:23:26.795819  421523 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:23:26.799957  421523 command_runner.go:130] > NAME=Buildroot
	I0805 12:23:26.799974  421523 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 12:23:26.799980  421523 command_runner.go:130] > ID=buildroot
	I0805 12:23:26.799988  421523 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 12:23:26.799995  421523 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 12:23:26.800048  421523 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:23:26.800090  421523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:23:26.800185  421523 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:23:26.800290  421523 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:23:26.800303  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /etc/ssl/certs/3912192.pem
	I0805 12:23:26.800439  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:23:26.810657  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:23:26.833987  421523 start.go:296] duration metric: took 128.408207ms for postStartSetup
	I0805 12:23:26.834050  421523 fix.go:56] duration metric: took 1m31.916920326s for fixHost
	I0805 12:23:26.834077  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.836778  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.837247  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.837278  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.837473  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.837682  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.837847  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.837982  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.838141  421523 main.go:141] libmachine: Using SSH client type: native
	I0805 12:23:26.838366  421523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0805 12:23:26.838381  421523 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:23:26.944755  421523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722860606.923364013
	
	I0805 12:23:26.944784  421523 fix.go:216] guest clock: 1722860606.923364013
	I0805 12:23:26.944792  421523 fix.go:229] Guest: 2024-08-05 12:23:26.923364013 +0000 UTC Remote: 2024-08-05 12:23:26.834055772 +0000 UTC m=+92.046937845 (delta=89.308241ms)
	I0805 12:23:26.944834  421523 fix.go:200] guest clock delta is within tolerance: 89.308241ms
	I0805 12:23:26.944845  421523 start.go:83] releasing machines lock for "multinode-841883", held for 1m32.027729676s
	I0805 12:23:26.944875  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.945139  421523 main.go:141] libmachine: (multinode-841883) Calling .GetIP
	I0805 12:23:26.947812  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.948246  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.948286  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.948432  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.948935  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.949147  421523 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:23:26.949246  421523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:23:26.949293  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.949388  421523 ssh_runner.go:195] Run: cat /version.json
	I0805 12:23:26.949429  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:23:26.952088  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952431  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.952456  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952493  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952657  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.952825  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.952932  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:26.952958  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:26.952963  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.953081  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:23:26.953137  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:23:26.953251  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:23:26.953434  421523 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:23:26.953605  421523 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:23:27.052081  421523 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 12:23:27.052940  421523 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 12:23:27.053115  421523 ssh_runner.go:195] Run: systemctl --version
	I0805 12:23:27.058902  421523 command_runner.go:130] > systemd 252 (252)
	I0805 12:23:27.058936  421523 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 12:23:27.059188  421523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:23:27.234197  421523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 12:23:27.240736  421523 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 12:23:27.240786  421523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:23:27.240852  421523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:23:27.250419  421523 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 12:23:27.250445  421523 start.go:495] detecting cgroup driver to use...
	I0805 12:23:27.250529  421523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:23:27.269813  421523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:23:27.288676  421523 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:23:27.288736  421523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:23:27.303402  421523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:23:27.318886  421523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:23:27.487837  421523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:23:27.625251  421523 docker.go:233] disabling docker service ...
	I0805 12:23:27.625317  421523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:23:27.647064  421523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:23:27.661886  421523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:23:27.794504  421523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:23:27.930845  421523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:23:27.945315  421523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:23:27.963525  421523 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0805 12:23:27.963813  421523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:23:27.963877  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:27.974620  421523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:23:27.974684  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:27.985649  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:27.995775  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.006085  421523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:23:28.016497  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.027028  421523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.037341  421523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:23:28.047576  421523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:23:28.057292  421523 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 12:23:28.057391  421523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:23:28.066643  421523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:23:28.199392  421523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:23:28.447994  421523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:23:28.448076  421523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:23:28.457074  421523 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0805 12:23:28.457111  421523 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 12:23:28.457118  421523 command_runner.go:130] > Device: 0,22	Inode: 1317        Links: 1
	I0805 12:23:28.457125  421523 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 12:23:28.457130  421523 command_runner.go:130] > Access: 2024-08-05 12:23:28.411505442 +0000
	I0805 12:23:28.457138  421523 command_runner.go:130] > Modify: 2024-08-05 12:23:28.321503260 +0000
	I0805 12:23:28.457143  421523 command_runner.go:130] > Change: 2024-08-05 12:23:28.321503260 +0000
	I0805 12:23:28.457146  421523 command_runner.go:130] >  Birth: -
	I0805 12:23:28.457449  421523 start.go:563] Will wait 60s for crictl version
	I0805 12:23:28.457510  421523 ssh_runner.go:195] Run: which crictl
	I0805 12:23:28.461231  421523 command_runner.go:130] > /usr/bin/crictl
	I0805 12:23:28.461499  421523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:23:28.496308  421523 command_runner.go:130] > Version:  0.1.0
	I0805 12:23:28.496331  421523 command_runner.go:130] > RuntimeName:  cri-o
	I0805 12:23:28.496338  421523 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0805 12:23:28.496346  421523 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 12:23:28.496492  421523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:23:28.496570  421523 ssh_runner.go:195] Run: crio --version
	I0805 12:23:28.524237  421523 command_runner.go:130] > crio version 1.29.1
	I0805 12:23:28.524261  421523 command_runner.go:130] > Version:        1.29.1
	I0805 12:23:28.524267  421523 command_runner.go:130] > GitCommit:      unknown
	I0805 12:23:28.524272  421523 command_runner.go:130] > GitCommitDate:  unknown
	I0805 12:23:28.524275  421523 command_runner.go:130] > GitTreeState:   clean
	I0805 12:23:28.524281  421523 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 12:23:28.524285  421523 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 12:23:28.524289  421523 command_runner.go:130] > Compiler:       gc
	I0805 12:23:28.524294  421523 command_runner.go:130] > Platform:       linux/amd64
	I0805 12:23:28.524301  421523 command_runner.go:130] > Linkmode:       dynamic
	I0805 12:23:28.524308  421523 command_runner.go:130] > BuildTags:      
	I0805 12:23:28.524327  421523 command_runner.go:130] >   containers_image_ostree_stub
	I0805 12:23:28.524335  421523 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 12:23:28.524343  421523 command_runner.go:130] >   btrfs_noversion
	I0805 12:23:28.524350  421523 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 12:23:28.524361  421523 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 12:23:28.524365  421523 command_runner.go:130] >   seccomp
	I0805 12:23:28.524370  421523 command_runner.go:130] > LDFlags:          unknown
	I0805 12:23:28.524377  421523 command_runner.go:130] > SeccompEnabled:   true
	I0805 12:23:28.524381  421523 command_runner.go:130] > AppArmorEnabled:  false
	I0805 12:23:28.524462  421523 ssh_runner.go:195] Run: crio --version
	I0805 12:23:28.551812  421523 command_runner.go:130] > crio version 1.29.1
	I0805 12:23:28.551840  421523 command_runner.go:130] > Version:        1.29.1
	I0805 12:23:28.551850  421523 command_runner.go:130] > GitCommit:      unknown
	I0805 12:23:28.551856  421523 command_runner.go:130] > GitCommitDate:  unknown
	I0805 12:23:28.551863  421523 command_runner.go:130] > GitTreeState:   clean
	I0805 12:23:28.551872  421523 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 12:23:28.551879  421523 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 12:23:28.551885  421523 command_runner.go:130] > Compiler:       gc
	I0805 12:23:28.551892  421523 command_runner.go:130] > Platform:       linux/amd64
	I0805 12:23:28.551896  421523 command_runner.go:130] > Linkmode:       dynamic
	I0805 12:23:28.551901  421523 command_runner.go:130] > BuildTags:      
	I0805 12:23:28.551906  421523 command_runner.go:130] >   containers_image_ostree_stub
	I0805 12:23:28.551911  421523 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 12:23:28.551915  421523 command_runner.go:130] >   btrfs_noversion
	I0805 12:23:28.551920  421523 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 12:23:28.551924  421523 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 12:23:28.551928  421523 command_runner.go:130] >   seccomp
	I0805 12:23:28.551932  421523 command_runner.go:130] > LDFlags:          unknown
	I0805 12:23:28.551936  421523 command_runner.go:130] > SeccompEnabled:   true
	I0805 12:23:28.551941  421523 command_runner.go:130] > AppArmorEnabled:  false
	I0805 12:23:28.554053  421523 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:23:28.555418  421523 main.go:141] libmachine: (multinode-841883) Calling .GetIP
	I0805 12:23:28.558354  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:28.558703  421523 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:23:28.558738  421523 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:23:28.558910  421523 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:23:28.563510  421523 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0805 12:23:28.563636  421523 kubeadm.go:883] updating cluster {Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:23:28.563839  421523 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:23:28.563910  421523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:23:28.607922  421523 command_runner.go:130] > {
	I0805 12:23:28.607950  421523 command_runner.go:130] >   "images": [
	I0805 12:23:28.607955  421523 command_runner.go:130] >     {
	I0805 12:23:28.607962  421523 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 12:23:28.607967  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.607973  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 12:23:28.607977  421523 command_runner.go:130] >       ],
	I0805 12:23:28.607981  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.607991  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 12:23:28.608003  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 12:23:28.608010  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608018  421523 command_runner.go:130] >       "size": "87165492",
	I0805 12:23:28.608028  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608034  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608046  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608053  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608060  421523 command_runner.go:130] >     },
	I0805 12:23:28.608065  421523 command_runner.go:130] >     {
	I0805 12:23:28.608077  421523 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0805 12:23:28.608087  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608100  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0805 12:23:28.608106  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608115  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608127  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0805 12:23:28.608144  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0805 12:23:28.608152  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608158  421523 command_runner.go:130] >       "size": "87174707",
	I0805 12:23:28.608162  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608171  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608181  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608193  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608203  421523 command_runner.go:130] >     },
	I0805 12:23:28.608211  421523 command_runner.go:130] >     {
	I0805 12:23:28.608221  421523 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 12:23:28.608230  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608239  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 12:23:28.608245  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608249  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608262  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 12:23:28.608277  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 12:23:28.608286  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608296  421523 command_runner.go:130] >       "size": "1363676",
	I0805 12:23:28.608305  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608398  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608439  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608447  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608456  421523 command_runner.go:130] >     },
	I0805 12:23:28.608462  421523 command_runner.go:130] >     {
	I0805 12:23:28.608476  421523 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 12:23:28.608486  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608502  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 12:23:28.608513  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608521  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608531  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 12:23:28.608558  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 12:23:28.608567  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608574  421523 command_runner.go:130] >       "size": "31470524",
	I0805 12:23:28.608583  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608590  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608599  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608604  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608611  421523 command_runner.go:130] >     },
	I0805 12:23:28.608616  421523 command_runner.go:130] >     {
	I0805 12:23:28.608627  421523 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 12:23:28.608637  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608645  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 12:23:28.608656  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608664  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608678  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 12:23:28.608691  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 12:23:28.608697  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608703  421523 command_runner.go:130] >       "size": "61245718",
	I0805 12:23:28.608712  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.608720  421523 command_runner.go:130] >       "username": "nonroot",
	I0805 12:23:28.608728  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608735  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608744  421523 command_runner.go:130] >     },
	I0805 12:23:28.608750  421523 command_runner.go:130] >     {
	I0805 12:23:28.608763  421523 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 12:23:28.608772  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608778  421523 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 12:23:28.608784  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608791  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608805  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 12:23:28.608819  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 12:23:28.608828  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608835  421523 command_runner.go:130] >       "size": "150779692",
	I0805 12:23:28.608843  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.608849  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.608858  421523 command_runner.go:130] >       },
	I0805 12:23:28.608863  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.608869  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.608875  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.608884  421523 command_runner.go:130] >     },
	I0805 12:23:28.608890  421523 command_runner.go:130] >     {
	I0805 12:23:28.608903  421523 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 12:23:28.608912  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.608927  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 12:23:28.608936  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608943  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.608953  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 12:23:28.608966  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 12:23:28.608978  421523 command_runner.go:130] >       ],
	I0805 12:23:28.608987  421523 command_runner.go:130] >       "size": "117609954",
	I0805 12:23:28.608993  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609013  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.609023  421523 command_runner.go:130] >       },
	I0805 12:23:28.609032  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609037  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609041  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609047  421523 command_runner.go:130] >     },
	I0805 12:23:28.609059  421523 command_runner.go:130] >     {
	I0805 12:23:28.609072  421523 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 12:23:28.609085  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609096  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 12:23:28.609105  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609112  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609134  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 12:23:28.609147  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 12:23:28.609153  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609164  421523 command_runner.go:130] >       "size": "112198984",
	I0805 12:23:28.609171  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609180  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.609185  421523 command_runner.go:130] >       },
	I0805 12:23:28.609194  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609200  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609206  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609211  421523 command_runner.go:130] >     },
	I0805 12:23:28.609215  421523 command_runner.go:130] >     {
	I0805 12:23:28.609221  421523 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 12:23:28.609225  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609232  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 12:23:28.609237  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609246  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609258  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 12:23:28.609269  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 12:23:28.609283  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609290  421523 command_runner.go:130] >       "size": "85953945",
	I0805 12:23:28.609297  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.609303  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609310  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609317  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609323  421523 command_runner.go:130] >     },
	I0805 12:23:28.609333  421523 command_runner.go:130] >     {
	I0805 12:23:28.609352  421523 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 12:23:28.609361  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609370  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 12:23:28.609378  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609382  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609390  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 12:23:28.609400  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 12:23:28.609405  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609414  421523 command_runner.go:130] >       "size": "63051080",
	I0805 12:23:28.609420  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609429  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.609435  421523 command_runner.go:130] >       },
	I0805 12:23:28.609445  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609451  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609460  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.609466  421523 command_runner.go:130] >     },
	I0805 12:23:28.609471  421523 command_runner.go:130] >     {
	I0805 12:23:28.609482  421523 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 12:23:28.609491  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.609496  421523 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 12:23:28.609500  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609505  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.609514  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 12:23:28.609521  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 12:23:28.609526  421523 command_runner.go:130] >       ],
	I0805 12:23:28.609530  421523 command_runner.go:130] >       "size": "750414",
	I0805 12:23:28.609534  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.609538  421523 command_runner.go:130] >         "value": "65535"
	I0805 12:23:28.609541  421523 command_runner.go:130] >       },
	I0805 12:23:28.609545  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.609551  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.609557  421523 command_runner.go:130] >       "pinned": true
	I0805 12:23:28.609560  421523 command_runner.go:130] >     }
	I0805 12:23:28.609564  421523 command_runner.go:130] >   ]
	I0805 12:23:28.609567  421523 command_runner.go:130] > }
	I0805 12:23:28.609787  421523 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:23:28.609799  421523 crio.go:433] Images already preloaded, skipping extraction
	I0805 12:23:28.609852  421523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:23:28.642887  421523 command_runner.go:130] > {
	I0805 12:23:28.642909  421523 command_runner.go:130] >   "images": [
	I0805 12:23:28.642914  421523 command_runner.go:130] >     {
	I0805 12:23:28.642922  421523 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 12:23:28.642936  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.642943  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 12:23:28.642947  421523 command_runner.go:130] >       ],
	I0805 12:23:28.642951  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.642959  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 12:23:28.642966  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 12:23:28.642969  421523 command_runner.go:130] >       ],
	I0805 12:23:28.642974  421523 command_runner.go:130] >       "size": "87165492",
	I0805 12:23:28.642977  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.642981  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.642990  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.642994  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.642998  421523 command_runner.go:130] >     },
	I0805 12:23:28.643001  421523 command_runner.go:130] >     {
	I0805 12:23:28.643008  421523 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0805 12:23:28.643015  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643020  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0805 12:23:28.643024  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643028  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643037  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0805 12:23:28.643044  421523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0805 12:23:28.643050  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643054  421523 command_runner.go:130] >       "size": "87174707",
	I0805 12:23:28.643059  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643067  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643071  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643075  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643080  421523 command_runner.go:130] >     },
	I0805 12:23:28.643084  421523 command_runner.go:130] >     {
	I0805 12:23:28.643092  421523 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 12:23:28.643099  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643104  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 12:23:28.643108  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643112  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643119  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 12:23:28.643125  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 12:23:28.643129  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643133  421523 command_runner.go:130] >       "size": "1363676",
	I0805 12:23:28.643136  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643142  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643164  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643171  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643174  421523 command_runner.go:130] >     },
	I0805 12:23:28.643177  421523 command_runner.go:130] >     {
	I0805 12:23:28.643183  421523 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 12:23:28.643187  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643192  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 12:23:28.643198  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643202  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643209  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 12:23:28.643222  421523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 12:23:28.643228  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643232  421523 command_runner.go:130] >       "size": "31470524",
	I0805 12:23:28.643236  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643240  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643244  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643248  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643252  421523 command_runner.go:130] >     },
	I0805 12:23:28.643255  421523 command_runner.go:130] >     {
	I0805 12:23:28.643262  421523 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 12:23:28.643269  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643274  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 12:23:28.643278  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643281  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643288  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 12:23:28.643297  421523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 12:23:28.643301  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643305  421523 command_runner.go:130] >       "size": "61245718",
	I0805 12:23:28.643309  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643313  421523 command_runner.go:130] >       "username": "nonroot",
	I0805 12:23:28.643317  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643321  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643324  421523 command_runner.go:130] >     },
	I0805 12:23:28.643327  421523 command_runner.go:130] >     {
	I0805 12:23:28.643335  421523 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 12:23:28.643341  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643346  421523 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 12:23:28.643350  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643354  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643363  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 12:23:28.643369  421523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 12:23:28.643375  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643378  421523 command_runner.go:130] >       "size": "150779692",
	I0805 12:23:28.643382  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643388  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643394  421523 command_runner.go:130] >       },
	I0805 12:23:28.643400  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643404  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643407  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643411  421523 command_runner.go:130] >     },
	I0805 12:23:28.643414  421523 command_runner.go:130] >     {
	I0805 12:23:28.643420  421523 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 12:23:28.643426  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643431  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 12:23:28.643436  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643449  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643458  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 12:23:28.643465  421523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 12:23:28.643469  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643473  421523 command_runner.go:130] >       "size": "117609954",
	I0805 12:23:28.643479  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643483  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643487  421523 command_runner.go:130] >       },
	I0805 12:23:28.643491  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643494  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643498  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643504  421523 command_runner.go:130] >     },
	I0805 12:23:28.643507  421523 command_runner.go:130] >     {
	I0805 12:23:28.643515  421523 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 12:23:28.643520  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643527  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 12:23:28.643530  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643537  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643550  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 12:23:28.643559  421523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 12:23:28.643563  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643567  421523 command_runner.go:130] >       "size": "112198984",
	I0805 12:23:28.643572  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643576  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643581  421523 command_runner.go:130] >       },
	I0805 12:23:28.643585  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643591  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643595  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643600  421523 command_runner.go:130] >     },
	I0805 12:23:28.643604  421523 command_runner.go:130] >     {
	I0805 12:23:28.643610  421523 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 12:23:28.643615  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643620  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 12:23:28.643623  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643627  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643636  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 12:23:28.643646  421523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 12:23:28.643653  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643657  421523 command_runner.go:130] >       "size": "85953945",
	I0805 12:23:28.643660  421523 command_runner.go:130] >       "uid": null,
	I0805 12:23:28.643665  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643668  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643672  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643675  421523 command_runner.go:130] >     },
	I0805 12:23:28.643679  421523 command_runner.go:130] >     {
	I0805 12:23:28.643685  421523 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 12:23:28.643691  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643697  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 12:23:28.643702  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643706  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643713  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 12:23:28.643721  421523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 12:23:28.643725  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643729  421523 command_runner.go:130] >       "size": "63051080",
	I0805 12:23:28.643735  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643749  421523 command_runner.go:130] >         "value": "0"
	I0805 12:23:28.643755  421523 command_runner.go:130] >       },
	I0805 12:23:28.643759  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643763  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643769  421523 command_runner.go:130] >       "pinned": false
	I0805 12:23:28.643773  421523 command_runner.go:130] >     },
	I0805 12:23:28.643778  421523 command_runner.go:130] >     {
	I0805 12:23:28.643784  421523 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 12:23:28.643790  421523 command_runner.go:130] >       "repoTags": [
	I0805 12:23:28.643794  421523 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 12:23:28.643798  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643803  421523 command_runner.go:130] >       "repoDigests": [
	I0805 12:23:28.643812  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 12:23:28.643821  421523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 12:23:28.643824  421523 command_runner.go:130] >       ],
	I0805 12:23:28.643828  421523 command_runner.go:130] >       "size": "750414",
	I0805 12:23:28.643833  421523 command_runner.go:130] >       "uid": {
	I0805 12:23:28.643838  421523 command_runner.go:130] >         "value": "65535"
	I0805 12:23:28.643844  421523 command_runner.go:130] >       },
	I0805 12:23:28.643848  421523 command_runner.go:130] >       "username": "",
	I0805 12:23:28.643851  421523 command_runner.go:130] >       "spec": null,
	I0805 12:23:28.643855  421523 command_runner.go:130] >       "pinned": true
	I0805 12:23:28.643859  421523 command_runner.go:130] >     }
	I0805 12:23:28.643862  421523 command_runner.go:130] >   ]
	I0805 12:23:28.643865  421523 command_runner.go:130] > }
	I0805 12:23:28.644810  421523 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:23:28.644830  421523 cache_images.go:84] Images are preloaded, skipping loading
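A quick cross-check of the preloaded image list parsed above can be run against the CRI itself; a minimal sketch, assuming the multinode-841883 profile named in the kubelet flags below and a current crictl:

	minikube -p multinode-841883 ssh "sudo crictl images --output json"

The JSON crictl returns carries the same repoTags, repoDigests and size fields that crio.go:514 inspects before deciding the preload can be skipped.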
	I0805 12:23:28.644841  421523 kubeadm.go:934] updating node { 192.168.39.86 8443 v1.30.3 crio true true} ...
	I0805 12:23:28.644972  421523 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-841883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
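To confirm the kubelet flags that actually reach systemd on the node, the rendered unit and its drop-ins can be dumped; a sketch with the same assumed profile (systemctl cat shows every drop-in, so it does not depend on guessing a drop-in filename):

	minikube -p multinode-841883 ssh "sudo systemctl cat kubelet"
	minikube -p multinode-841883 ssh "sudo cat /var/lib/kubelet/config.yaml"

The second command shows the KubeletConfiguration file referenced by --config in the ExecStart line above.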
	I0805 12:23:28.645057  421523 ssh_runner.go:195] Run: crio config
	I0805 12:23:28.685241  421523 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0805 12:23:28.685264  421523 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0805 12:23:28.685271  421523 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0805 12:23:28.685274  421523 command_runner.go:130] > #
	I0805 12:23:28.685287  421523 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0805 12:23:28.685297  421523 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0805 12:23:28.685307  421523 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0805 12:23:28.685320  421523 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0805 12:23:28.685324  421523 command_runner.go:130] > # reload'.
	I0805 12:23:28.685331  421523 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0805 12:23:28.685337  421523 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0805 12:23:28.685343  421523 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0805 12:23:28.685351  421523 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0805 12:23:28.685355  421523 command_runner.go:130] > [crio]
	I0805 12:23:28.685361  421523 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0805 12:23:28.685370  421523 command_runner.go:130] > # containers images, in this directory.
	I0805 12:23:28.685377  421523 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0805 12:23:28.685389  421523 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0805 12:23:28.685400  421523 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0805 12:23:28.685411  421523 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0805 12:23:28.685421  421523 command_runner.go:130] > # imagestore = ""
	I0805 12:23:28.685432  421523 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0805 12:23:28.685438  421523 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0805 12:23:28.685443  421523 command_runner.go:130] > storage_driver = "overlay"
	I0805 12:23:28.685449  421523 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0805 12:23:28.685461  421523 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0805 12:23:28.685467  421523 command_runner.go:130] > storage_option = [
	I0805 12:23:28.685481  421523 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0805 12:23:28.685489  421523 command_runner.go:130] > ]
	I0805 12:23:28.685498  421523 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0805 12:23:28.685516  421523 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0805 12:23:28.685522  421523 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0805 12:23:28.685530  421523 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0805 12:23:28.685538  421523 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0805 12:23:28.685549  421523 command_runner.go:130] > # always happen on a node reboot
	I0805 12:23:28.685557  421523 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0805 12:23:28.685572  421523 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0805 12:23:28.685584  421523 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0805 12:23:28.685593  421523 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0805 12:23:28.685601  421523 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0805 12:23:28.685616  421523 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0805 12:23:28.685631  421523 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0805 12:23:28.685641  421523 command_runner.go:130] > # internal_wipe = true
	I0805 12:23:28.685654  421523 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0805 12:23:28.685666  421523 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0805 12:23:28.685672  421523 command_runner.go:130] > # internal_repair = false
	I0805 12:23:28.685680  421523 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0805 12:23:28.685689  421523 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0805 12:23:28.685701  421523 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0805 12:23:28.685711  421523 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0805 12:23:28.685723  421523 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0805 12:23:28.685732  421523 command_runner.go:130] > [crio.api]
	I0805 12:23:28.685740  421523 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0805 12:23:28.685749  421523 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0805 12:23:28.685759  421523 command_runner.go:130] > # IP address on which the stream server will listen.
	I0805 12:23:28.685765  421523 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0805 12:23:28.685774  421523 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0805 12:23:28.685786  421523 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0805 12:23:28.685796  421523 command_runner.go:130] > # stream_port = "0"
	I0805 12:23:28.685806  421523 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0805 12:23:28.685817  421523 command_runner.go:130] > # stream_enable_tls = false
	I0805 12:23:28.685829  421523 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0805 12:23:28.685841  421523 command_runner.go:130] > # stream_idle_timeout = ""
	I0805 12:23:28.685852  421523 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0805 12:23:28.685864  421523 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0805 12:23:28.685874  421523 command_runner.go:130] > # minutes.
	I0805 12:23:28.685881  421523 command_runner.go:130] > # stream_tls_cert = ""
	I0805 12:23:28.685898  421523 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0805 12:23:28.685910  421523 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0805 12:23:28.685920  421523 command_runner.go:130] > # stream_tls_key = ""
	I0805 12:23:28.685929  421523 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0805 12:23:28.685938  421523 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0805 12:23:28.685954  421523 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0805 12:23:28.685964  421523 command_runner.go:130] > # stream_tls_ca = ""
	I0805 12:23:28.685977  421523 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 12:23:28.685987  421523 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0805 12:23:28.686005  421523 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 12:23:28.686015  421523 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0805 12:23:28.686022  421523 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0805 12:23:28.686030  421523 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0805 12:23:28.686040  421523 command_runner.go:130] > [crio.runtime]
	I0805 12:23:28.686052  421523 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0805 12:23:28.686127  421523 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0805 12:23:28.686134  421523 command_runner.go:130] > # "nofile=1024:2048"
	I0805 12:23:28.686144  421523 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0805 12:23:28.686150  421523 command_runner.go:130] > # default_ulimits = [
	I0805 12:23:28.686155  421523 command_runner.go:130] > # ]
	I0805 12:23:28.686172  421523 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0805 12:23:28.686179  421523 command_runner.go:130] > # no_pivot = false
	I0805 12:23:28.686185  421523 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0805 12:23:28.686192  421523 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0805 12:23:28.686199  421523 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0805 12:23:28.686207  421523 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0805 12:23:28.686214  421523 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0805 12:23:28.686226  421523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 12:23:28.686237  421523 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0805 12:23:28.686244  421523 command_runner.go:130] > # Cgroup setting for conmon
	I0805 12:23:28.686258  421523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0805 12:23:28.686268  421523 command_runner.go:130] > conmon_cgroup = "pod"
	I0805 12:23:28.686279  421523 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0805 12:23:28.686287  421523 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0805 12:23:28.686297  421523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 12:23:28.686306  421523 command_runner.go:130] > conmon_env = [
	I0805 12:23:28.686318  421523 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 12:23:28.686334  421523 command_runner.go:130] > ]
	I0805 12:23:28.686345  421523 command_runner.go:130] > # Additional environment variables to set for all the
	I0805 12:23:28.686355  421523 command_runner.go:130] > # containers. These are overridden if set in the
	I0805 12:23:28.686368  421523 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0805 12:23:28.686377  421523 command_runner.go:130] > # default_env = [
	I0805 12:23:28.686383  421523 command_runner.go:130] > # ]
	I0805 12:23:28.686395  421523 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0805 12:23:28.686411  421523 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0805 12:23:28.686420  421523 command_runner.go:130] > # selinux = false
	I0805 12:23:28.686430  421523 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0805 12:23:28.686442  421523 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0805 12:23:28.686452  421523 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0805 12:23:28.686458  421523 command_runner.go:130] > # seccomp_profile = ""
	I0805 12:23:28.686471  421523 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0805 12:23:28.686483  421523 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0805 12:23:28.686495  421523 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0805 12:23:28.686506  421523 command_runner.go:130] > # which might increase security.
	I0805 12:23:28.686517  421523 command_runner.go:130] > # This option is currently deprecated,
	I0805 12:23:28.686526  421523 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0805 12:23:28.686536  421523 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0805 12:23:28.686546  421523 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0805 12:23:28.686556  421523 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0805 12:23:28.686562  421523 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0805 12:23:28.686570  421523 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0805 12:23:28.686577  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.686582  421523 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0805 12:23:28.686588  421523 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0805 12:23:28.686593  421523 command_runner.go:130] > # the cgroup blockio controller.
	I0805 12:23:28.686597  421523 command_runner.go:130] > # blockio_config_file = ""
	I0805 12:23:28.686607  421523 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0805 12:23:28.686616  421523 command_runner.go:130] > # blockio parameters.
	I0805 12:23:28.686624  421523 command_runner.go:130] > # blockio_reload = false
	I0805 12:23:28.686638  421523 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0805 12:23:28.686648  421523 command_runner.go:130] > # irqbalance daemon.
	I0805 12:23:28.686658  421523 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0805 12:23:28.686670  421523 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0805 12:23:28.686681  421523 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0805 12:23:28.686695  421523 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0805 12:23:28.686712  421523 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0805 12:23:28.686727  421523 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0805 12:23:28.686739  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.686749  421523 command_runner.go:130] > # rdt_config_file = ""
	I0805 12:23:28.686760  421523 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0805 12:23:28.686770  421523 command_runner.go:130] > cgroup_manager = "cgroupfs"
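cgroup_manager = "cgroupfs" only works cleanly if the kubelet drives cgroups the same way; a quick way to compare the two settings on the node (same assumed profile as above), where cgroupDriver is the standard KubeletConfiguration field and a mismatch between the two is a common source of pod-start failures:

	minikube -p multinode-841883 ssh "sudo crio config 2>/dev/null | grep cgroup_manager"
	minikube -p multinode-841883 ssh "sudo grep cgroupDriver /var/lib/kubelet/config.yaml"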
	I0805 12:23:28.686791  421523 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0805 12:23:28.686801  421523 command_runner.go:130] > # separate_pull_cgroup = ""
	I0805 12:23:28.686811  421523 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0805 12:23:28.686823  421523 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0805 12:23:28.686833  421523 command_runner.go:130] > # will be added.
	I0805 12:23:28.686840  421523 command_runner.go:130] > # default_capabilities = [
	I0805 12:23:28.686848  421523 command_runner.go:130] > # 	"CHOWN",
	I0805 12:23:28.686853  421523 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0805 12:23:28.686859  421523 command_runner.go:130] > # 	"FSETID",
	I0805 12:23:28.686862  421523 command_runner.go:130] > # 	"FOWNER",
	I0805 12:23:28.686868  421523 command_runner.go:130] > # 	"SETGID",
	I0805 12:23:28.686874  421523 command_runner.go:130] > # 	"SETUID",
	I0805 12:23:28.686879  421523 command_runner.go:130] > # 	"SETPCAP",
	I0805 12:23:28.686888  421523 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0805 12:23:28.686894  421523 command_runner.go:130] > # 	"KILL",
	I0805 12:23:28.686903  421523 command_runner.go:130] > # ]
	I0805 12:23:28.686914  421523 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0805 12:23:28.686927  421523 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0805 12:23:28.686935  421523 command_runner.go:130] > # add_inheritable_capabilities = false
	I0805 12:23:28.686947  421523 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0805 12:23:28.686961  421523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 12:23:28.686971  421523 command_runner.go:130] > default_sysctls = [
	I0805 12:23:28.686978  421523 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0805 12:23:28.686988  421523 command_runner.go:130] > ]
	I0805 12:23:28.687000  421523 command_runner.go:130] > # List of devices on the host that a
	I0805 12:23:28.687012  421523 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0805 12:23:28.687022  421523 command_runner.go:130] > # allowed_devices = [
	I0805 12:23:28.687028  421523 command_runner.go:130] > # 	"/dev/fuse",
	I0805 12:23:28.687034  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687042  421523 command_runner.go:130] > # List of additional devices, specified as
	I0805 12:23:28.687056  421523 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0805 12:23:28.687068  421523 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0805 12:23:28.687079  421523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 12:23:28.687086  421523 command_runner.go:130] > # additional_devices = [
	I0805 12:23:28.687091  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687100  421523 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0805 12:23:28.687113  421523 command_runner.go:130] > # cdi_spec_dirs = [
	I0805 12:23:28.687121  421523 command_runner.go:130] > # 	"/etc/cdi",
	I0805 12:23:28.687128  421523 command_runner.go:130] > # 	"/var/run/cdi",
	I0805 12:23:28.687137  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687147  421523 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0805 12:23:28.687165  421523 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0805 12:23:28.687174  421523 command_runner.go:130] > # Defaults to false.
	I0805 12:23:28.687183  421523 command_runner.go:130] > # device_ownership_from_security_context = false
	I0805 12:23:28.687195  421523 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0805 12:23:28.687205  421523 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0805 12:23:28.687215  421523 command_runner.go:130] > # hooks_dir = [
	I0805 12:23:28.687222  421523 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0805 12:23:28.687229  421523 command_runner.go:130] > # ]
	I0805 12:23:28.687239  421523 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0805 12:23:28.687251  421523 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0805 12:23:28.687263  421523 command_runner.go:130] > # its default mounts from the following two files:
	I0805 12:23:28.687271  421523 command_runner.go:130] > #
	I0805 12:23:28.687281  421523 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0805 12:23:28.687293  421523 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0805 12:23:28.687305  421523 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0805 12:23:28.687313  421523 command_runner.go:130] > #
	I0805 12:23:28.687322  421523 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0805 12:23:28.687336  421523 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0805 12:23:28.687348  421523 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0805 12:23:28.687360  421523 command_runner.go:130] > #      only add mounts it finds in this file.
	I0805 12:23:28.687368  421523 command_runner.go:130] > #
	I0805 12:23:28.687374  421523 command_runner.go:130] > # default_mounts_file = ""
	I0805 12:23:28.687385  421523 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0805 12:23:28.687395  421523 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0805 12:23:28.687403  421523 command_runner.go:130] > pids_limit = 1024
	I0805 12:23:28.687412  421523 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0805 12:23:28.687424  421523 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0805 12:23:28.687438  421523 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0805 12:23:28.687453  421523 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0805 12:23:28.687462  421523 command_runner.go:130] > # log_size_max = -1
	I0805 12:23:28.687473  421523 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0805 12:23:28.687483  421523 command_runner.go:130] > # log_to_journald = false
	I0805 12:23:28.687492  421523 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0805 12:23:28.687507  421523 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0805 12:23:28.687522  421523 command_runner.go:130] > # Path to directory for container attach sockets.
	I0805 12:23:28.687533  421523 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0805 12:23:28.687541  421523 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0805 12:23:28.687550  421523 command_runner.go:130] > # bind_mount_prefix = ""
	I0805 12:23:28.687560  421523 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0805 12:23:28.687569  421523 command_runner.go:130] > # read_only = false
	I0805 12:23:28.687580  421523 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0805 12:23:28.687593  421523 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0805 12:23:28.687600  421523 command_runner.go:130] > # live configuration reload.
	I0805 12:23:28.687609  421523 command_runner.go:130] > # log_level = "info"
	I0805 12:23:28.687620  421523 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0805 12:23:28.687632  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.687640  421523 command_runner.go:130] > # log_filter = ""
	I0805 12:23:28.687650  421523 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0805 12:23:28.687661  421523 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0805 12:23:28.687666  421523 command_runner.go:130] > # separated by comma.
	I0805 12:23:28.687679  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687688  421523 command_runner.go:130] > # uid_mappings = ""
	I0805 12:23:28.687698  421523 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0805 12:23:28.687711  421523 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0805 12:23:28.687722  421523 command_runner.go:130] > # separated by comma.
	I0805 12:23:28.687734  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687752  421523 command_runner.go:130] > # gid_mappings = ""
	I0805 12:23:28.687762  421523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0805 12:23:28.687773  421523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 12:23:28.687781  421523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 12:23:28.687803  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687814  421523 command_runner.go:130] > # minimum_mappable_uid = -1
	I0805 12:23:28.687823  421523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0805 12:23:28.687835  421523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 12:23:28.687846  421523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 12:23:28.687860  421523 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 12:23:28.687870  421523 command_runner.go:130] > # minimum_mappable_gid = -1
	I0805 12:23:28.687879  421523 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0805 12:23:28.687892  421523 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0805 12:23:28.687902  421523 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0805 12:23:28.687915  421523 command_runner.go:130] > # ctr_stop_timeout = 30
	I0805 12:23:28.687927  421523 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0805 12:23:28.687939  421523 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0805 12:23:28.687949  421523 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0805 12:23:28.687958  421523 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0805 12:23:28.687962  421523 command_runner.go:130] > drop_infra_ctr = false
	I0805 12:23:28.687970  421523 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0805 12:23:28.687980  421523 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0805 12:23:28.687992  421523 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0805 12:23:28.688002  421523 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0805 12:23:28.688013  421523 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0805 12:23:28.688024  421523 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0805 12:23:28.688037  421523 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0805 12:23:28.688047  421523 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0805 12:23:28.688053  421523 command_runner.go:130] > # shared_cpuset = ""
	I0805 12:23:28.688061  421523 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0805 12:23:28.688072  421523 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0805 12:23:28.688079  421523 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0805 12:23:28.688092  421523 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0805 12:23:28.688102  421523 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0805 12:23:28.688112  421523 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0805 12:23:28.688126  421523 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0805 12:23:28.688134  421523 command_runner.go:130] > # enable_criu_support = false
	I0805 12:23:28.688142  421523 command_runner.go:130] > # Enable/disable the generation of the container,
	I0805 12:23:28.688154  421523 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0805 12:23:28.688168  421523 command_runner.go:130] > # enable_pod_events = false
	I0805 12:23:28.688178  421523 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0805 12:23:28.688192  421523 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0805 12:23:28.688203  421523 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0805 12:23:28.688210  421523 command_runner.go:130] > # default_runtime = "runc"
	I0805 12:23:28.688222  421523 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0805 12:23:28.688235  421523 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0805 12:23:28.688248  421523 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0805 12:23:28.688258  421523 command_runner.go:130] > # creation as a file is not desired either.
	I0805 12:23:28.688272  421523 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0805 12:23:28.688286  421523 command_runner.go:130] > # the hostname is being managed dynamically.
	I0805 12:23:28.688296  421523 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0805 12:23:28.688303  421523 command_runner.go:130] > # ]
	I0805 12:23:28.688314  421523 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0805 12:23:28.688325  421523 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0805 12:23:28.688334  421523 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0805 12:23:28.688342  421523 command_runner.go:130] > # Each entry in the table should follow the format:
	I0805 12:23:28.688351  421523 command_runner.go:130] > #
	I0805 12:23:28.688358  421523 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0805 12:23:28.688368  421523 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0805 12:23:28.688394  421523 command_runner.go:130] > # runtime_type = "oci"
	I0805 12:23:28.688404  421523 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0805 12:23:28.688413  421523 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0805 12:23:28.688423  421523 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0805 12:23:28.688431  421523 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0805 12:23:28.688440  421523 command_runner.go:130] > # monitor_env = []
	I0805 12:23:28.688448  421523 command_runner.go:130] > # privileged_without_host_devices = false
	I0805 12:23:28.688459  421523 command_runner.go:130] > # allowed_annotations = []
	I0805 12:23:28.688471  421523 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0805 12:23:28.688480  421523 command_runner.go:130] > # Where:
	I0805 12:23:28.688489  421523 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0805 12:23:28.688502  421523 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0805 12:23:28.688513  421523 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0805 12:23:28.688519  421523 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0805 12:23:28.688528  421523 command_runner.go:130] > #   in $PATH.
	I0805 12:23:28.688538  421523 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0805 12:23:28.688549  421523 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0805 12:23:28.688557  421523 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0805 12:23:28.688568  421523 command_runner.go:130] > #   state.
	I0805 12:23:28.688582  421523 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0805 12:23:28.688595  421523 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0805 12:23:28.688608  421523 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0805 12:23:28.688618  421523 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0805 12:23:28.688627  421523 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0805 12:23:28.688637  421523 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0805 12:23:28.688648  421523 command_runner.go:130] > #   The currently recognized values are:
	I0805 12:23:28.688658  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0805 12:23:28.688673  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0805 12:23:28.688690  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0805 12:23:28.688703  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0805 12:23:28.688716  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0805 12:23:28.688725  421523 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0805 12:23:28.688735  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0805 12:23:28.688748  421523 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0805 12:23:28.688761  421523 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0805 12:23:28.688774  421523 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0805 12:23:28.688784  421523 command_runner.go:130] > #   deprecated option "conmon".
	I0805 12:23:28.688798  421523 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0805 12:23:28.688809  421523 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0805 12:23:28.688816  421523 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0805 12:23:28.688825  421523 command_runner.go:130] > #   should be moved to the container's cgroup
	I0805 12:23:28.688842  421523 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0805 12:23:28.688853  421523 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0805 12:23:28.688863  421523 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0805 12:23:28.688874  421523 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0805 12:23:28.688882  421523 command_runner.go:130] > #
	I0805 12:23:28.688891  421523 command_runner.go:130] > # Using the seccomp notifier feature:
	I0805 12:23:28.688900  421523 command_runner.go:130] > #
	I0805 12:23:28.688909  421523 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0805 12:23:28.688921  421523 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0805 12:23:28.688928  421523 command_runner.go:130] > #
	I0805 12:23:28.688935  421523 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0805 12:23:28.688948  421523 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0805 12:23:28.688957  421523 command_runner.go:130] > #
	I0805 12:23:28.688967  421523 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0805 12:23:28.688975  421523 command_runner.go:130] > # feature.
	I0805 12:23:28.688980  421523 command_runner.go:130] > #
	I0805 12:23:28.688991  421523 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0805 12:23:28.689003  421523 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0805 12:23:28.689012  421523 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0805 12:23:28.689018  421523 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0805 12:23:28.689031  421523 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0805 12:23:28.689039  421523 command_runner.go:130] > #
	I0805 12:23:28.689049  421523 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0805 12:23:28.689065  421523 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0805 12:23:28.689073  421523 command_runner.go:130] > #
	I0805 12:23:28.689082  421523 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0805 12:23:28.689095  421523 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0805 12:23:28.689103  421523 command_runner.go:130] > #
	I0805 12:23:28.689111  421523 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0805 12:23:28.689119  421523 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0805 12:23:28.689125  421523 command_runner.go:130] > # limitation.
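The notifier described above needs at least runc 1.1.0 or crun 0.19, so a version check on the node is the natural first step before relying on it; a sketch with the same assumed profile:

	minikube -p multinode-841883 ssh "runc --version"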
	I0805 12:23:28.689135  421523 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0805 12:23:28.689145  421523 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0805 12:23:28.689173  421523 command_runner.go:130] > runtime_type = "oci"
	I0805 12:23:28.689190  421523 command_runner.go:130] > runtime_root = "/run/runc"
	I0805 12:23:28.689196  421523 command_runner.go:130] > runtime_config_path = ""
	I0805 12:23:28.689202  421523 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0805 12:23:28.689206  421523 command_runner.go:130] > monitor_cgroup = "pod"
	I0805 12:23:28.689215  421523 command_runner.go:130] > monitor_exec_cgroup = ""
	I0805 12:23:28.689221  421523 command_runner.go:130] > monitor_env = [
	I0805 12:23:28.689235  421523 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 12:23:28.689240  421523 command_runner.go:130] > ]
	I0805 12:23:28.689253  421523 command_runner.go:130] > privileged_without_host_devices = false
	I0805 12:23:28.689265  421523 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0805 12:23:28.689276  421523 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0805 12:23:28.689288  421523 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0805 12:23:28.689301  421523 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0805 12:23:28.689312  421523 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0805 12:23:28.689323  421523 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0805 12:23:28.689340  421523 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0805 12:23:28.689356  421523 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0805 12:23:28.689368  421523 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0805 12:23:28.689380  421523 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0805 12:23:28.689388  421523 command_runner.go:130] > # Example:
	I0805 12:23:28.689395  421523 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0805 12:23:28.689401  421523 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0805 12:23:28.689406  421523 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0805 12:23:28.689413  421523 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0805 12:23:28.689419  421523 command_runner.go:130] > # cpuset = 0
	I0805 12:23:28.689425  421523 command_runner.go:130] > # cpushares = "0-1"
	I0805 12:23:28.689430  421523 command_runner.go:130] > # Where:
	I0805 12:23:28.689441  421523 command_runner.go:130] > # The workload name is workload-type.
	I0805 12:23:28.689452  421523 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0805 12:23:28.689461  421523 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0805 12:23:28.689469  421523 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0805 12:23:28.689481  421523 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0805 12:23:28.689488  421523 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
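If a workload table like the commented example above were actually enabled, the opt-in happens purely through pod annotations; a hypothetical sketch (the pod name, image and cpushares value are made up, and it assumes a kubectl new enough to support --annotations):

	kubectl run busybox-workload --image=busybox:1.36 --restart=Never \
	  --annotations=io.crio/workload=enable \
	  --annotations='io.crio.workload-type/busybox-workload={"cpushares": "512"}' \
	  -- sleep 3600

The first annotation matches activation_annotation (its value is ignored), and the second uses the annotation_prefix/$container_name form to override cpushares for that one container.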
	I0805 12:23:28.689493  421523 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0805 12:23:28.689501  421523 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0805 12:23:28.689507  421523 command_runner.go:130] > # Default value is set to true
	I0805 12:23:28.689514  421523 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0805 12:23:28.689523  421523 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0805 12:23:28.689532  421523 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0805 12:23:28.689539  421523 command_runner.go:130] > # Default value is set to 'false'
	I0805 12:23:28.689546  421523 command_runner.go:130] > # disable_hostport_mapping = false
	I0805 12:23:28.689556  421523 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0805 12:23:28.689562  421523 command_runner.go:130] > #
	I0805 12:23:28.689571  421523 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0805 12:23:28.689583  421523 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0805 12:23:28.689592  421523 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0805 12:23:28.689602  421523 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0805 12:23:28.689610  421523 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0805 12:23:28.689617  421523 command_runner.go:130] > [crio.image]
	I0805 12:23:28.689629  421523 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0805 12:23:28.689639  421523 command_runner.go:130] > # default_transport = "docker://"
	I0805 12:23:28.689648  421523 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0805 12:23:28.689660  421523 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0805 12:23:28.689665  421523 command_runner.go:130] > # global_auth_file = ""
	I0805 12:23:28.689672  421523 command_runner.go:130] > # The image used to instantiate infra containers.
	I0805 12:23:28.689683  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.689694  421523 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0805 12:23:28.689707  421523 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0805 12:23:28.689720  421523 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0805 12:23:28.689731  421523 command_runner.go:130] > # This option supports live configuration reload.
	I0805 12:23:28.689740  421523 command_runner.go:130] > # pause_image_auth_file = ""
	I0805 12:23:28.689749  421523 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0805 12:23:28.689758  421523 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0805 12:23:28.689774  421523 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0805 12:23:28.689786  421523 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0805 12:23:28.689793  421523 command_runner.go:130] > # pause_command = "/pause"
	I0805 12:23:28.689809  421523 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0805 12:23:28.689821  421523 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0805 12:23:28.689834  421523 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0805 12:23:28.689843  421523 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0805 12:23:28.689854  421523 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0805 12:23:28.689862  421523 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0805 12:23:28.689870  421523 command_runner.go:130] > # pinned_images = [
	I0805 12:23:28.689879  421523 command_runner.go:130] > # ]
	I0805 12:23:28.689888  421523 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0805 12:23:28.689901  421523 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0805 12:23:28.689913  421523 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0805 12:23:28.689926  421523 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0805 12:23:28.689936  421523 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0805 12:23:28.689945  421523 command_runner.go:130] > # signature_policy = ""
	I0805 12:23:28.689953  421523 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0805 12:23:28.689966  421523 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0805 12:23:28.689979  421523 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0805 12:23:28.689993  421523 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0805 12:23:28.690006  421523 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0805 12:23:28.690017  421523 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0805 12:23:28.690029  421523 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0805 12:23:28.690041  421523 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0805 12:23:28.690047  421523 command_runner.go:130] > # changing them here.
	I0805 12:23:28.690052  421523 command_runner.go:130] > # insecure_registries = [
	I0805 12:23:28.690060  421523 command_runner.go:130] > # ]
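Since the comments above defer registry configuration to the system-wide file, the effective registry setup on the node is easiest to read straight from that file; a sketch with the same assumed profile:

	minikube -p multinode-841883 ssh "sudo cat /etc/containers/registries.conf"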
	I0805 12:23:28.690071  421523 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0805 12:23:28.690083  421523 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0805 12:23:28.690093  421523 command_runner.go:130] > # image_volumes = "mkdir"
	I0805 12:23:28.690104  421523 command_runner.go:130] > # Temporary directory to use for storing big files
	I0805 12:23:28.690114  421523 command_runner.go:130] > # big_files_temporary_dir = ""
	I0805 12:23:28.690125  421523 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0805 12:23:28.690132  421523 command_runner.go:130] > # CNI plugins.
	I0805 12:23:28.690136  421523 command_runner.go:130] > [crio.network]
	I0805 12:23:28.690148  421523 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0805 12:23:28.690167  421523 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0805 12:23:28.690177  421523 command_runner.go:130] > # cni_default_network = ""
	I0805 12:23:28.690188  421523 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0805 12:23:28.690198  421523 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0805 12:23:28.690211  421523 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0805 12:23:28.690220  421523 command_runner.go:130] > # plugin_dirs = [
	I0805 12:23:28.690227  421523 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0805 12:23:28.690231  421523 command_runner.go:130] > # ]
	I0805 12:23:28.690242  421523 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0805 12:23:28.690251  421523 command_runner.go:130] > [crio.metrics]
	I0805 12:23:28.690258  421523 command_runner.go:130] > # Globally enable or disable metrics support.
	I0805 12:23:28.690268  421523 command_runner.go:130] > enable_metrics = true
	I0805 12:23:28.690278  421523 command_runner.go:130] > # Specify enabled metrics collectors.
	I0805 12:23:28.690288  421523 command_runner.go:130] > # By default, all metrics are enabled.
	I0805 12:23:28.690300  421523 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0805 12:23:28.690313  421523 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0805 12:23:28.690322  421523 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0805 12:23:28.690330  421523 command_runner.go:130] > # metrics_collectors = [
	I0805 12:23:28.690336  421523 command_runner.go:130] > # 	"operations",
	I0805 12:23:28.690347  421523 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0805 12:23:28.690358  421523 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0805 12:23:28.690368  421523 command_runner.go:130] > # 	"operations_errors",
	I0805 12:23:28.690377  421523 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0805 12:23:28.690387  421523 command_runner.go:130] > # 	"image_pulls_by_name",
	I0805 12:23:28.690394  421523 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0805 12:23:28.690403  421523 command_runner.go:130] > # 	"image_pulls_failures",
	I0805 12:23:28.690410  421523 command_runner.go:130] > # 	"image_pulls_successes",
	I0805 12:23:28.690418  421523 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0805 12:23:28.690422  421523 command_runner.go:130] > # 	"image_layer_reuse",
	I0805 12:23:28.690432  421523 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0805 12:23:28.690442  421523 command_runner.go:130] > # 	"containers_oom_total",
	I0805 12:23:28.690449  421523 command_runner.go:130] > # 	"containers_oom",
	I0805 12:23:28.690460  421523 command_runner.go:130] > # 	"processes_defunct",
	I0805 12:23:28.690469  421523 command_runner.go:130] > # 	"operations_total",
	I0805 12:23:28.690479  421523 command_runner.go:130] > # 	"operations_latency_seconds",
	I0805 12:23:28.690490  421523 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0805 12:23:28.690500  421523 command_runner.go:130] > # 	"operations_errors_total",
	I0805 12:23:28.690508  421523 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0805 12:23:28.690515  421523 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0805 12:23:28.690520  421523 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0805 12:23:28.690530  421523 command_runner.go:130] > # 	"image_pulls_success_total",
	I0805 12:23:28.690540  421523 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0805 12:23:28.690546  421523 command_runner.go:130] > # 	"containers_oom_count_total",
	I0805 12:23:28.690557  421523 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0805 12:23:28.690567  421523 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0805 12:23:28.690580  421523 command_runner.go:130] > # ]
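The prefix equivalence noted above ("operations" is treated the same as "crio_operations" and "container_runtime_crio_operations") can be sketched by normalising collector names before comparison. The helper canonicalCollector below is an assumption for illustration, not CRI-O's implementation.

	package main

	import (
		"fmt"
		"strings"
	)

	// canonicalCollector strips the optional "container_runtime_" and "crio_"
	// prefixes so that the three spellings of a collector name compare equal.
	func canonicalCollector(name string) string {
		name = strings.TrimPrefix(name, "container_runtime_")
		return strings.TrimPrefix(name, "crio_")
	}

	func main() {
		for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
			fmt.Println(n, "->", canonicalCollector(n))
		}
	}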
	I0805 12:23:28.690591  421523 command_runner.go:130] > # The port on which the metrics server will listen.
	I0805 12:23:28.690600  421523 command_runner.go:130] > # metrics_port = 9090
	I0805 12:23:28.690609  421523 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0805 12:23:28.690615  421523 command_runner.go:130] > # metrics_socket = ""
	I0805 12:23:28.690623  421523 command_runner.go:130] > # The certificate for the secure metrics server.
	I0805 12:23:28.690636  421523 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0805 12:23:28.690650  421523 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0805 12:23:28.690660  421523 command_runner.go:130] > # certificate on any modification event.
	I0805 12:23:28.690669  421523 command_runner.go:130] > # metrics_cert = ""
	I0805 12:23:28.690682  421523 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0805 12:23:28.690691  421523 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0805 12:23:28.690698  421523 command_runner.go:130] > # metrics_key = ""
	I0805 12:23:28.690704  421523 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0805 12:23:28.690713  421523 command_runner.go:130] > [crio.tracing]
	I0805 12:23:28.690727  421523 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0805 12:23:28.690737  421523 command_runner.go:130] > # enable_tracing = false
	I0805 12:23:28.690748  421523 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0805 12:23:28.690758  421523 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0805 12:23:28.690770  421523 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0805 12:23:28.690781  421523 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0805 12:23:28.690790  421523 command_runner.go:130] > # CRI-O NRI configuration.
	I0805 12:23:28.690796  421523 command_runner.go:130] > [crio.nri]
	I0805 12:23:28.690804  421523 command_runner.go:130] > # Globally enable or disable NRI.
	I0805 12:23:28.690813  421523 command_runner.go:130] > # enable_nri = false
	I0805 12:23:28.690823  421523 command_runner.go:130] > # NRI socket to listen on.
	I0805 12:23:28.690834  421523 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0805 12:23:28.690843  421523 command_runner.go:130] > # NRI plugin directory to use.
	I0805 12:23:28.690854  421523 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0805 12:23:28.690864  421523 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0805 12:23:28.690874  421523 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0805 12:23:28.690882  421523 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0805 12:23:28.690888  421523 command_runner.go:130] > # nri_disable_connections = false
	I0805 12:23:28.690899  421523 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0805 12:23:28.690910  421523 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0805 12:23:28.690919  421523 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0805 12:23:28.690929  421523 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0805 12:23:28.690941  421523 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0805 12:23:28.690949  421523 command_runner.go:130] > [crio.stats]
	I0805 12:23:28.690961  421523 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0805 12:23:28.690972  421523 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0805 12:23:28.690980  421523 command_runner.go:130] > # stats_collection_period = 0
	I0805 12:23:28.691008  421523 command_runner.go:130] ! time="2024-08-05 12:23:28.655914775Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0805 12:23:28.691035  421523 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0805 12:23:28.691180  421523 cni.go:84] Creating CNI manager for ""
	I0805 12:23:28.691191  421523 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 12:23:28.691201  421523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:23:28.691236  421523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-841883 NodeName:multinode-841883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:23:28.691423  421523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-841883"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
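The generated config pairs podSubnet 10.244.0.0/16 with serviceSubnet 10.96.0.0/12. As a quick sanity check, a Go sketch using net/netip can confirm the two ranges do not overlap; the overlaps helper below is illustrative and not part of minikube.

	package main

	import (
		"fmt"
		"net/netip"
	)

	// overlaps reports whether two prefixes share any addresses: for masked
	// prefixes this is true exactly when one contains the other's base address.
	func overlaps(a, b netip.Prefix) bool {
		return a.Contains(b.Addr()) || b.Contains(a.Addr())
	}

	func main() {
		pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the config above
		svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet from the config above
		fmt.Println("pod/service CIDRs overlap:", overlaps(pod, svc))
	}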
	
	I0805 12:23:28.691502  421523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:23:28.701122  421523 command_runner.go:130] > kubeadm
	I0805 12:23:28.701142  421523 command_runner.go:130] > kubectl
	I0805 12:23:28.701148  421523 command_runner.go:130] > kubelet
	I0805 12:23:28.701178  421523 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:23:28.701239  421523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:23:28.710476  421523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 12:23:28.726603  421523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:23:28.742700  421523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0805 12:23:28.759256  421523 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I0805 12:23:28.763112  421523 command_runner.go:130] > 192.168.39.86	control-plane.minikube.internal
	I0805 12:23:28.763279  421523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:23:28.902875  421523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:23:28.917549  421523 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883 for IP: 192.168.39.86
	I0805 12:23:28.917578  421523 certs.go:194] generating shared ca certs ...
	I0805 12:23:28.917609  421523 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:23:28.917815  421523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:23:28.917874  421523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:23:28.917888  421523 certs.go:256] generating profile certs ...
	I0805 12:23:28.917965  421523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/client.key
	I0805 12:23:28.918024  421523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.key.993fd26d
	I0805 12:23:28.918060  421523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.key
	I0805 12:23:28.918071  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 12:23:28.918083  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 12:23:28.918097  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 12:23:28.918109  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 12:23:28.918121  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 12:23:28.918136  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 12:23:28.918157  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 12:23:28.918169  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 12:23:28.918220  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:23:28.918248  421523 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:23:28.918255  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:23:28.918275  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:23:28.918316  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:23:28.918352  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:23:28.918389  421523 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:23:28.918417  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem -> /usr/share/ca-certificates/391219.pem
	I0805 12:23:28.918432  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> /usr/share/ca-certificates/3912192.pem
	I0805 12:23:28.918444  421523 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:28.919126  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:23:28.943798  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:23:28.967412  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:23:28.990644  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:23:29.014148  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:23:29.037380  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:23:29.061282  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:23:29.084297  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/multinode-841883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:23:29.107203  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:23:29.131157  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:23:29.154664  421523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:23:29.178549  421523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:23:29.194632  421523 ssh_runner.go:195] Run: openssl version
	I0805 12:23:29.200242  421523 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 12:23:29.200462  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:23:29.210871  421523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.215112  421523 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.215270  421523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.215309  421523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:23:29.220667  421523 command_runner.go:130] > 51391683
	I0805 12:23:29.220924  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:23:29.229662  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:23:29.240468  421523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.244957  421523 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.245135  421523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.245194  421523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:23:29.250714  421523 command_runner.go:130] > 3ec20f2e
	I0805 12:23:29.250778  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:23:29.259658  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:23:29.270082  421523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.274403  421523 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.274544  421523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.274622  421523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:23:29.280223  421523 command_runner.go:130] > b5213941
	I0805 12:23:29.280295  421523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
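The command pairs above follow one pattern: hash a CA certificate with openssl x509 -hash -noout -in <pem>, then link it into /etc/ssl/certs as <hash>.0. A minimal Go sketch of that procedure, assuming the openssl binary is on PATH (linkCACert is a hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM certificate and
	// symlinks it into certsDir as <hash>.0, mirroring the shell commands in
	// the log above. Illustrative sketch; requires the openssl binary.
	func linkCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}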
	I0805 12:23:29.289224  421523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:23:29.293855  421523 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:23:29.293880  421523 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0805 12:23:29.293889  421523 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0805 12:23:29.293901  421523 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 12:23:29.293920  421523 command_runner.go:130] > Access: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.293932  421523 command_runner.go:130] > Modify: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.293939  421523 command_runner.go:130] > Change: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.293947  421523 command_runner.go:130] >  Birth: 2024-08-05 12:16:38.163467123 +0000
	I0805 12:23:29.294012  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:23:29.299445  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.299730  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:23:29.305138  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.305283  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:23:29.310504  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.310684  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:23:29.316024  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.316070  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:23:29.321282  421523 command_runner.go:130] > Certificate will not expire
	I0805 12:23:29.321327  421523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:23:29.326794  421523 command_runner.go:130] > Certificate will not expire
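Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now. The same check can be done natively with crypto/x509; the expiresWithin helper below is an illustrative sketch, not minikube's implementation.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path will already be
	// expired checkend from now, the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, checkend time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(checkend).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if expiring {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}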
	I0805 12:23:29.326887  421523 kubeadm.go:392] StartCluster: {Name:multinode-841883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-841883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.205 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.3 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:23:29.327045  421523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:23:29.327260  421523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:23:29.361122  421523 command_runner.go:130] > be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933
	I0805 12:23:29.361156  421523 command_runner.go:130] > bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f
	I0805 12:23:29.361166  421523 command_runner.go:130] > cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10
	I0805 12:23:29.361177  421523 command_runner.go:130] > e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1
	I0805 12:23:29.361186  421523 command_runner.go:130] > 9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a
	I0805 12:23:29.361238  421523 command_runner.go:130] > 4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7
	I0805 12:23:29.361262  421523 command_runner.go:130] > 38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe
	I0805 12:23:29.361336  421523 command_runner.go:130] > 7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1
	I0805 12:23:29.362757  421523 cri.go:89] found id: "be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933"
	I0805 12:23:29.362774  421523 cri.go:89] found id: "bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f"
	I0805 12:23:29.362780  421523 cri.go:89] found id: "cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10"
	I0805 12:23:29.362792  421523 cri.go:89] found id: "e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1"
	I0805 12:23:29.362797  421523 cri.go:89] found id: "9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a"
	I0805 12:23:29.362801  421523 cri.go:89] found id: "4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7"
	I0805 12:23:29.362809  421523 cri.go:89] found id: "38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe"
	I0805 12:23:29.362813  421523 cri.go:89] found id: "7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1"
	I0805 12:23:29.362818  421523 cri.go:89] found id: ""
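The IDs above come from shelling out to crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system. A simplified Go sketch of that call, assuming crictl is installed and sudo is available (listKubeSystemContainers is a hypothetical helper, much simpler than minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs crictl the same way the log above shows and
	// returns the container IDs it prints, one per line. Simplified illustration;
	// assumes crictl is installed and the caller can use sudo.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}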
	I0805 12:23:29.362874  421523 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.090412477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9353315-893d-42e7-86a1-7146e8d36c8a name=/runtime.v1.RuntimeService/Version
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.091439659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9678301-4ca0-4f2c-8cf2-d579ec4281d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.092109706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722860858092085691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9678301-4ca0-4f2c-8cf2-d579ec4281d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.092917869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e80ed7d7-9fb0-4fe0-a281-3ebe11e4270f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.093215112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e80ed7d7-9fb0-4fe0-a281-3ebe11e4270f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.094042814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4263e00071665fb62a27bd960d8a9141bf062de62f4e128f4f46034dfd628236,PodSandboxId:0680c63e48eecf32f4db50456d2cdbf763f72ef81b253e077df0622cc05d3e4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722860293050416784,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933,PodSandboxId:5415052d80d9bc352f5a9a1e80c1fdc4965d8f486e997c14d63784a90abd792c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722860237453831575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f,PodSandboxId:efbda2f5a062a7b3105c305106d35b07929873007d072f5afb089e7faa09219b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722860237396959152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.kubernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10,PodSandboxId:eb09f0acc4db3f91aef14462a298f0f24c2c63e7152d2c04625fffd9c0a5d319,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722860225375143490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1,PodSandboxId:ba809f3556888e01562cee1a8fd8a7d639f1406ab3c3bc9a89f1a95153c37fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722860221370951881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a,PodSandboxId:ec56389b09fd970d770bdcd650f65185042d1847f47c201302765071934665e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722860202228031552,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe,PodSandboxId:5658839e595f8ae657238db457616865e02d80ab7b8bf244c41874a829c054e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722860202152441421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7,PodSandboxId:aea5e35a8af16f80e782e0b0deb57cb886bf1ae41f9a252d1c212eb2f7e3fe22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722860202179572149,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,
},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1,PodSandboxId:4851d727499f1b8298a50dce48f87c8655f9fd8066eaf100567ccf06e7463a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722860202118566732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2e13678b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e80ed7d7-9fb0-4fe0-a281-3ebe11e4270f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.123294410Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b2220db-2831-4513-93a8-8ff88f06a487 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.123535221Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-7lqm2,Uid:f10ce9f2-7971-4942-836f-143b674e5cb4,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722860643367347595,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T12:23:36.058279097Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-841883,Uid:769ccf6183f9b4c42bf8977e06c6180b,Namespace:kube-system,Attempt:
1,},State:SANDBOX_READY,CreatedAt:1722860609637966214,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 769ccf6183f9b4c42bf8977e06c6180b,kubernetes.io/config.seen: 2024-08-05T12:16:47.125863581Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zrs8r,Uid:41dfbd90-fc76-49cd-8127-41d05f965cee,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722860609637409908,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,k8s-app: kube-dns,pod-template-hash
: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T12:17:16.939950109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-841883,Uid:1bf1ff096dd95d095391f6be6da0fb24,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722860609623308851,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1bf1ff096dd95d095391f6be6da0fb24,kubernetes.io/config.seen: 2024-08-05T12:16:47.125859254Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&PodSandboxMetadata{Nam
e:etcd-multinode-841883,Uid:480fe03e83fb027d0a44921d69591762,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722860609620338687,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.86:2379,kubernetes.io/config.hash: 480fe03e83fb027d0a44921d69591762,kubernetes.io/config.seen: 2024-08-05T12:16:47.125864681Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-841883,Uid:2bcce528bbdec41e89d8e795d3f250d7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722860609616303041,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kuber
netes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.86:8443,kubernetes.io/config.hash: 2bcce528bbdec41e89d8e795d3f250d7,kubernetes.io/config.seen: 2024-08-05T12:16:47.125865496Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&PodSandboxMetadata{Name:kube-proxy-h2bf5,Uid:cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722860609613389881,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]strin
g{kubernetes.io/config.seen: 2024-08-05T12:17:00.870530470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&PodSandboxMetadata{Name:kindnet-cwklz,Uid:3de46bbd-b3ee-4132-927a-2abded24a986,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722860609611786667,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T12:17:00.848709321Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d4d95110-27dc-4a02-810d-c60f43201bde,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1722860609603229429,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/t
mp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-05T12:17:16.945819985Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0b2220db-2831-4513-93a8-8ff88f06a487 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.124150662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e2772b0-5f96-47e4-8f9f-49a338ad7fa4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.124210436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e2772b0-5f96-47e4-8f9f-49a338ad7fa4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.124435457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e2772b0-5f96-47e4-8f9f-49a338ad7fa4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.143835248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2eea553-45be-4add-a17a-3c95310b2b36 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.143907695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2eea553-45be-4add-a17a-3c95310b2b36 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.145497651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bd8f52a-0162-4998-a7cd-7759298ecb0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.145980683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722860858145958249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bd8f52a-0162-4998-a7cd-7759298ecb0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.146348243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78f4901d-4ea0-4acf-83f0-62a78087836e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.146398995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78f4901d-4ea0-4acf-83f0-62a78087836e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.146806972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4263e00071665fb62a27bd960d8a9141bf062de62f4e128f4f46034dfd628236,PodSandboxId:0680c63e48eecf32f4db50456d2cdbf763f72ef81b253e077df0622cc05d3e4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722860293050416784,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933,PodSandboxId:5415052d80d9bc352f5a9a1e80c1fdc4965d8f486e997c14d63784a90abd792c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722860237453831575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f,PodSandboxId:efbda2f5a062a7b3105c305106d35b07929873007d072f5afb089e7faa09219b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722860237396959152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.kubernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10,PodSandboxId:eb09f0acc4db3f91aef14462a298f0f24c2c63e7152d2c04625fffd9c0a5d319,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722860225375143490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1,PodSandboxId:ba809f3556888e01562cee1a8fd8a7d639f1406ab3c3bc9a89f1a95153c37fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722860221370951881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a,PodSandboxId:ec56389b09fd970d770bdcd650f65185042d1847f47c201302765071934665e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722860202228031552,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe,PodSandboxId:5658839e595f8ae657238db457616865e02d80ab7b8bf244c41874a829c054e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722860202152441421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7,PodSandboxId:aea5e35a8af16f80e782e0b0deb57cb886bf1ae41f9a252d1c212eb2f7e3fe22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722860202179572149,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,
},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1,PodSandboxId:4851d727499f1b8298a50dce48f87c8655f9fd8066eaf100567ccf06e7463a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722860202118566732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2e13678b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78f4901d-4ea0-4acf-83f0-62a78087836e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.191010232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4791f4e6-b417-456d-96f4-0f8fed301241 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.191080900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4791f4e6-b417-456d-96f4-0f8fed301241 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.192229736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11e7fd98-fab1-48dd-9817-bb4119fc473f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.192676105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722860858192595068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11e7fd98-fab1-48dd-9817-bb4119fc473f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.193248030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98478e85-c741-412d-8992-59ced308a39c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.193300172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98478e85-c741-412d-8992-59ced308a39c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:27:38 multinode-841883 crio[2878]: time="2024-08-05 12:27:38.193686289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98795ff3de72c2452e43ef9b281090d775f986ad948912fa4f818d95b00050c0,PodSandboxId:752056d3ae54b22f231f0c9cd31b2306a402026a1079aaed2e2583afd64aab14,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722860643511564748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868,PodSandboxId:336818d1a255e5029842bdf1b80f7f275a776db50f36b23e492188fb4d37e62c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722860616434897404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015,PodSandboxId:1cadc7450b91bc1439026f7673ee1f59769ab98d26506a1aef946d7a0d0a047e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722860616423518560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c,PodSandboxId:72c1220da3ab072588cbe0f6408518211563aae2e6a48189a99f8db6721a1332,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722860616406911622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8317ecc61c341dfc6af1131856011dd807492b30baf5cab804eae543fea0eebc,PodSandboxId:e52865878a5061aec21758aa35a895f3d44460b5d0706d36e3b5371c8cf78b27,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722860616393433208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.ku
bernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc,PodSandboxId:4d3916bf084ff3002b4d491c8418e852c68c65921e6f4de12ca04e86e56fe5f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722860612589317430,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650,PodSandboxId:d0c548acf7266dda3b49cc063799473ddfe9acb87560165e9b7292c7ed9b71cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722860612563889753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c,PodSandboxId:72e039ff71c700ce91d9ed0f4ec05f88a6302e9680edf2c3f969e5049bd7d9b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722860612588706040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2e13678b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3,PodSandboxId:39dab2174e03330ef93d464e584ffe6fd9028e026f68f3a18cca54a619cae32b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722860612551299409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4263e00071665fb62a27bd960d8a9141bf062de62f4e128f4f46034dfd628236,PodSandboxId:0680c63e48eecf32f4db50456d2cdbf763f72ef81b253e077df0622cc05d3e4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722860293050416784,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7lqm2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f10ce9f2-7971-4942-836f-143b674e5cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 34ccb7c2,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933,PodSandboxId:5415052d80d9bc352f5a9a1e80c1fdc4965d8f486e997c14d63784a90abd792c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722860237453831575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrs8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfbd90-fc76-49cd-8127-41d05f965cee,},Annotations:map[string]string{io.kubernetes.container.hash: 70240e0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbcbcd0cbb8aac419f5d87c3ef071d0756652f64b3149ff41829c99197eb025f,PodSandboxId:efbda2f5a062a7b3105c305106d35b07929873007d072f5afb089e7faa09219b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722860237396959152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d4d95110-27dc-4a02-810d-c60f43201bde,},Annotations:map[string]string{io.kubernetes.container.hash: b6859de0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10,PodSandboxId:eb09f0acc4db3f91aef14462a298f0f24c2c63e7152d2c04625fffd9c0a5d319,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722860225375143490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwklz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3de46bbd-b3ee-4132-927a-2abded24a986,},Annotations:map[string]string{io.kubernetes.container.hash: c2fe2da6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1,PodSandboxId:ba809f3556888e01562cee1a8fd8a7d639f1406ab3c3bc9a89f1a95153c37fce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722860221370951881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h2bf5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49,},Annotations:map[string]string{io.kubernetes.container.hash: d65d9610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a,PodSandboxId:ec56389b09fd970d770bdcd650f65185042d1847f47c201302765071934665e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722860202228031552,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-841883,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 1bf1ff096dd95d095391f6be6da0fb24,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe,PodSandboxId:5658839e595f8ae657238db457616865e02d80ab7b8bf244c41874a829c054e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722860202152441421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 769ccf6183f9b4c42bf8977e06c6180b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7,PodSandboxId:aea5e35a8af16f80e782e0b0deb57cb886bf1ae41f9a252d1c212eb2f7e3fe22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722860202179572149,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480fe03e83fb027d0a44921d69591762,
},Annotations:map[string]string{io.kubernetes.container.hash: c7d3e2c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1,PodSandboxId:4851d727499f1b8298a50dce48f87c8655f9fd8066eaf100567ccf06e7463a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722860202118566732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-841883,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bcce528bbdec41e89d8e795d3f250d7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2e13678b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98478e85-c741-412d-8992-59ced308a39c name=/runtime.v1.RuntimeService/ListContainers
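	The crio debug entries above are routine CRI polling traffic: Version, ImageFsInfo, ListPodSandbox and ListContainers requests against the runtime.v1.RuntimeService endpoint, most likely issued by the kubelet and by the crictl invocations that produce the "container status" table below. As a hedged illustration only (not part of the captured report), the following minimal Go sketch issues the same ListContainers call directly against the CRI-O socket; it assumes the default socket path /var/run/crio/crio.sock and the availability of the k8s.io/cri-api v1 client stubs plus google.golang.org/grpc, none of which are confirmed by this log.

	// listcontainers.go - minimal sketch, assuming CRI-O on /var/run/crio/crio.sock
	// and the k8s.io/cri-api v1 stubs; mirrors /runtime.v1.RuntimeService/ListContainers
	// as seen in the crio debug log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the local CRI-O unix socket; no TLS is used on the local socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter: CRI-O then logs "No filters were applied, returning full container list",
		// exactly as in the entries above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Print a short ID, the container name and its CRI state,
			// roughly matching the "container status" table that follows.
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}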
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	98795ff3de72c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   752056d3ae54b       busybox-fc5497c4f-7lqm2
	7a98aae9aaee6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   336818d1a255e       kindnet-cwklz
	69b19862fff81       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   1cadc7450b91b       kube-proxy-h2bf5
	ce0d95b1a2638       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   72c1220da3ab0       coredns-7db6d8ff4d-zrs8r
	8317ecc61c341       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   e52865878a506       storage-provisioner
	e1f79275fe330       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   4d3916bf084ff       kube-controller-manager-multinode-841883
	695d028e1c305       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   72e039ff71c70       kube-apiserver-multinode-841883
	5117bb87b2f82       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   d0c548acf7266       kube-scheduler-multinode-841883
	2f3ade224e3cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   39dab2174e033       etcd-multinode-841883
	4263e00071665       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   0680c63e48eec       busybox-fc5497c4f-7lqm2
	be9aabcf660e7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   5415052d80d9b       coredns-7db6d8ff4d-zrs8r
	bbcbcd0cbb8aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   efbda2f5a062a       storage-provisioner
	cc02fb96e19d7       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   eb09f0acc4db3       kindnet-cwklz
	e6441cea8b8c7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   ba809f3556888       kube-proxy-h2bf5
	9f5bff1b0b670       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   ec56389b09fd9       kube-controller-manager-multinode-841883
	4c65eb324f492       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   aea5e35a8af16       etcd-multinode-841883
	38b97b9b3cf57       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   5658839e595f8       kube-scheduler-multinode-841883
	7ad7f7b96f849       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   4851d727499f1       kube-apiserver-multinode-841883
	
	
	==> coredns [be9aabcf660e72d03801ba3b30950b9c6f57ba94086ce1cc291dd5c5e32f8933] <==
	[INFO] 10.244.1.2:39925 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001821251s
	[INFO] 10.244.1.2:46445 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127982s
	[INFO] 10.244.1.2:56510 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010266s
	[INFO] 10.244.1.2:37286 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001245769s
	[INFO] 10.244.1.2:57588 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099668s
	[INFO] 10.244.1.2:45841 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106901s
	[INFO] 10.244.1.2:34459 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110108s
	[INFO] 10.244.0.3:34006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013481s
	[INFO] 10.244.0.3:41161 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067302s
	[INFO] 10.244.0.3:37785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056333s
	[INFO] 10.244.0.3:34587 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048812s
	[INFO] 10.244.1.2:51284 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137526s
	[INFO] 10.244.1.2:43623 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096345s
	[INFO] 10.244.1.2:53591 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092229s
	[INFO] 10.244.1.2:43422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076128s
	[INFO] 10.244.0.3:57865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113278s
	[INFO] 10.244.0.3:48031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000343192s
	[INFO] 10.244.0.3:58137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104051s
	[INFO] 10.244.0.3:38594 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091942s
	[INFO] 10.244.1.2:38327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010431s
	[INFO] 10.244.1.2:40574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068802s
	[INFO] 10.244.1.2:48699 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059382s
	[INFO] 10.244.1.2:49670 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094872s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ce0d95b1a263838ecc2145f0186c2edc7664b77d6da72ca2d16fc7c59dbfb40c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41583 - 53781 "HINFO IN 37412991472444561.7596293966948227027. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.015584976s
	
	
	==> describe nodes <==
	Name:               multinode-841883
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841883
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=multinode-841883
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T12_16_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:16:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841883
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:27:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:16:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:16:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:16:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:23:35 +0000   Mon, 05 Aug 2024 12:17:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    multinode-841883
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4adf07e287e34edea1723ba3f4587bda
	  System UUID:                4adf07e2-87e3-4ede-a172-3ba3f4587bda
	  Boot ID:                    0d30dd89-98f9-436f-8b69-49a330751387
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7lqm2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 coredns-7db6d8ff4d-zrs8r                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-841883                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-cwklz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-841883             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-841883    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-h2bf5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-841883             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-841883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-841883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-841883 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-841883 event: Registered Node multinode-841883 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-841883 status is now: NodeReady
	  Normal  Starting                 4m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node multinode-841883 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node multinode-841883 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node multinode-841883 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                node-controller  Node multinode-841883 event: Registered Node multinode-841883 in Controller
	
	
	Name:               multinode-841883-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-841883-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=multinode-841883
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T12_24_13_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:24:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-841883-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:25:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:25:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:25:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:25:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 12:24:43 +0000   Mon, 05 Aug 2024 12:25:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    multinode-841883-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b7d544f73564d0ebdb16ab281242cf5
	  System UUID:                6b7d544f-7356-4d0e-bdb1-6ab281242cf5
	  Boot ID:                    d2634bd5-8967-4bb9-83cf-280ed6dcee00
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jtgc2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-w4fdf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-proxy-6q2pz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m51s (x2 over 9m51s)  kubelet          Node multinode-841883-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m51s (x2 over 9m51s)  kubelet          Node multinode-841883-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m51s (x2 over 9m51s)  kubelet          Node multinode-841883-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m31s                  kubelet          Node multinode-841883-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-841883-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-841883-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-841883-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-841883-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-841883-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062248] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064651] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.193215] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.132128] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.259895] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.142872] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +4.084513] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +0.060374] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.489444] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.084526] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.067464] kauditd_printk_skb: 18 callbacks suppressed
	[Aug 5 12:17] systemd-fstab-generator[1466]: Ignoring "noauto" option for root device
	[  +5.722932] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 5 12:18] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 5 12:23] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.148580] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.169768] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.142903] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.268851] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +0.693965] systemd-fstab-generator[2961]: Ignoring "noauto" option for root device
	[  +3.055018] systemd-fstab-generator[3366]: Ignoring "noauto" option for root device
	[  +0.799838] kauditd_printk_skb: 184 callbacks suppressed
	[ +15.846388] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[  +0.102279] kauditd_printk_skb: 32 callbacks suppressed
	[Aug 5 12:24] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2f3ade224e3cf45fffdb29f70932afb418c9cd74b9345c096d14c2f17988cff3] <==
	{"level":"info","ts":"2024-08-05T12:23:32.995592Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:23:32.997667Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:23:32.997969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae switched to configuration voters=(6802115243719069102)"}
	{"level":"info","ts":"2024-08-05T12:23:32.998035Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e2108b476944475","local-member-id":"5e65f7c667250dae","added-peer-id":"5e65f7c667250dae","added-peer-peer-urls":["https://192.168.39.86:2380"]}
	{"level":"info","ts":"2024-08-05T12:23:32.998163Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e2108b476944475","local-member-id":"5e65f7c667250dae","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:23:32.998202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:23:33.013799Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T12:23:33.014011Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5e65f7c667250dae","initial-advertise-peer-urls":["https://192.168.39.86:2380"],"listen-peer-urls":["https://192.168.39.86:2380"],"advertise-client-urls":["https://192.168.39.86:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.86:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T12:23:33.014053Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T12:23:33.014169Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:23:33.014192Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:23:34.4597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T12:23:34.459757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T12:23:34.459854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae received MsgPreVoteResp from 5e65f7c667250dae at term 2"}
	{"level":"info","ts":"2024-08-05T12:23:34.459885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.459893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae received MsgVoteResp from 5e65f7c667250dae at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.459913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5e65f7c667250dae became leader at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.45994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5e65f7c667250dae elected leader 5e65f7c667250dae at term 3"}
	{"level":"info","ts":"2024-08-05T12:23:34.466438Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5e65f7c667250dae","local-member-attributes":"{Name:multinode-841883 ClientURLs:[https://192.168.39.86:2379]}","request-path":"/0/members/5e65f7c667250dae/attributes","cluster-id":"1e2108b476944475","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:23:34.466534Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:23:34.466725Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:23:34.46706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:23:34.46711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:23:34.468872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:23:34.468877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.86:2379"}
	
	
	==> etcd [4c65eb324f492bf364da9ea9a47631827809e1542734c857fabec4020e9dc3d7] <==
	{"level":"info","ts":"2024-08-05T12:16:42.576748Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:16:42.57685Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:16:42.596161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.86:2379"}
	{"level":"info","ts":"2024-08-05T12:17:47.987047Z","caller":"traceutil/trace.go:171","msg":"trace[1517432004] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:461; }","duration":"109.233877ms","start":"2024-08-05T12:17:47.877793Z","end":"2024-08-05T12:17:47.987027Z","steps":["trace[1517432004] 'read index received'  (duration: 31.410545ms)","trace[1517432004] 'applied index is now lower than readState.Index'  (duration: 77.822851ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T12:17:47.987591Z","caller":"traceutil/trace.go:171","msg":"trace[386703741] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"152.518744ms","start":"2024-08-05T12:17:47.835062Z","end":"2024-08-05T12:17:47.987581Z","steps":["trace[386703741] 'process raft request'  (duration: 151.807825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T12:17:47.987983Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.110614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841883-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-05T12:17:47.988565Z","caller":"traceutil/trace.go:171","msg":"trace[370504870] range","detail":"{range_begin:/registry/minions/multinode-841883-m02; range_end:; response_count:1; response_revision:443; }","duration":"110.762702ms","start":"2024-08-05T12:17:47.877788Z","end":"2024-08-05T12:17:47.988551Z","steps":["trace[370504870] 'agreement among raft nodes before linearized reading'  (duration: 109.973069ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:18:41.748142Z","caller":"traceutil/trace.go:171","msg":"trace[1817276155] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"187.275505ms","start":"2024-08-05T12:18:41.56083Z","end":"2024-08-05T12:18:41.748106Z","steps":["trace[1817276155] 'process raft request'  (duration: 182.299104ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:18:41.748395Z","caller":"traceutil/trace.go:171","msg":"trace[1505216082] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"151.245753ms","start":"2024-08-05T12:18:41.597131Z","end":"2024-08-05T12:18:41.748377Z","steps":["trace[1505216082] 'process raft request'  (duration: 150.803801ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:18:41.755811Z","caller":"traceutil/trace.go:171","msg":"trace[1386503716] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:608; }","duration":"124.756732ms","start":"2024-08-05T12:18:41.631041Z","end":"2024-08-05T12:18:41.755798Z","steps":["trace[1386503716] 'read index received'  (duration: 111.997993ms)","trace[1386503716] 'applied index is now lower than readState.Index'  (duration: 12.75796ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T12:18:41.755963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.892543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T12:18:41.756Z","caller":"traceutil/trace.go:171","msg":"trace[754973419] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:577; }","duration":"124.979561ms","start":"2024-08-05T12:18:41.631015Z","end":"2024-08-05T12:18:41.755995Z","steps":["trace[754973419] 'agreement among raft nodes before linearized reading'  (duration: 124.871517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T12:19:38.550459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.491356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-841883-m03\" ","response":"range_response_count:1 size:3116"}
	{"level":"info","ts":"2024-08-05T12:19:38.55081Z","caller":"traceutil/trace.go:171","msg":"trace[2146832427] range","detail":"{range_begin:/registry/minions/multinode-841883-m03; range_end:; response_count:1; response_revision:710; }","duration":"144.882733ms","start":"2024-08-05T12:19:38.405905Z","end":"2024-08-05T12:19:38.550787Z","steps":["trace[2146832427] 'range keys from in-memory index tree'  (duration: 144.218068ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:19:38.552525Z","caller":"traceutil/trace.go:171","msg":"trace[803482962] transaction","detail":"{read_only:false; response_revision:711; number_of_response:1; }","duration":"112.809359ms","start":"2024-08-05T12:19:38.439706Z","end":"2024-08-05T12:19:38.552515Z","steps":["trace[803482962] 'process raft request'  (duration: 112.673289ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T12:21:55.944235Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T12:21:55.94436Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-841883","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.86:2380"],"advertise-client-urls":["https://192.168.39.86:2379"]}
	{"level":"warn","ts":"2024-08-05T12:21:55.944507Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.86:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:21:55.944543Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.86:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:21:55.946475Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:21:55.946548Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T12:21:55.991981Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5e65f7c667250dae","current-leader-member-id":"5e65f7c667250dae"}
	{"level":"info","ts":"2024-08-05T12:21:55.994811Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:21:55.994959Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.86:2380"}
	{"level":"info","ts":"2024-08-05T12:21:55.994992Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-841883","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.86:2380"],"advertise-client-urls":["https://192.168.39.86:2379"]}
	
	
	==> kernel <==
	 12:27:38 up 11 min,  0 users,  load average: 0.23, 0.18, 0.11
	Linux multinode-841883 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a98aae9aaee6555cf56d0a63b8a6aa7840e4775625a04ad762cc70b4247c868] <==
	I0805 12:26:37.481979       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:26:47.486746       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:26:47.486816       1 main.go:299] handling current node
	I0805 12:26:47.486837       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:26:47.486845       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:26:57.484670       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:26:57.484807       1 main.go:299] handling current node
	I0805 12:26:57.484841       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:26:57.484860       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:27:07.481282       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:27:07.481332       1 main.go:299] handling current node
	I0805 12:27:07.481349       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:27:07.481358       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:27:17.482217       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:27:17.482337       1 main.go:299] handling current node
	I0805 12:27:17.482369       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:27:17.482388       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:27:27.481671       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:27:27.481853       1 main.go:299] handling current node
	I0805 12:27:27.481914       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:27:27.481935       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:27:37.481841       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:27:37.481929       1 main.go:299] handling current node
	I0805 12:27:37.481963       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:27:37.481973       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [cc02fb96e19d7d6a667ebd81c3e5cdcecb15fbfb47330274fb4b86c710474f10] <==
	I0805 12:21:06.464363       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:16.463987       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:16.464122       1 main.go:299] handling current node
	I0805 12:21:16.464155       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:16.464174       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:16.464312       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:16.464334       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:26.472845       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:26.472957       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:26.473187       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:26.473218       1 main.go:299] handling current node
	I0805 12:21:26.473252       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:26.473285       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:36.466152       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:36.466320       1 main.go:299] handling current node
	I0805 12:21:36.466359       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:36.466377       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:36.466567       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:36.466589       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	I0805 12:21:46.468155       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0805 12:21:46.468198       1 main.go:299] handling current node
	I0805 12:21:46.468215       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0805 12:21:46.468221       1 main.go:322] Node multinode-841883-m02 has CIDR [10.244.1.0/24] 
	I0805 12:21:46.468400       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0805 12:21:46.468423       1 main.go:322] Node multinode-841883-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [695d028e1c305ae52f913f9ca1f0162d430e5829f7b2fd485ab1c928bfcd102c] <==
	I0805 12:23:35.761657       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 12:23:35.763452       1 aggregator.go:165] initial CRD sync complete...
	I0805 12:23:35.763597       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 12:23:35.763690       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 12:23:35.805474       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 12:23:35.805969       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 12:23:35.811531       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 12:23:35.811729       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 12:23:35.811755       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 12:23:35.819425       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 12:23:35.820174       1 shared_informer.go:320] Caches are synced for configmaps
	E0805 12:23:35.825927       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 12:23:35.835010       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 12:23:35.864950       1 cache.go:39] Caches are synced for autoregister controller
	I0805 12:23:35.865134       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 12:23:35.865174       1 policy_source.go:224] refreshing policies
	I0805 12:23:35.870011       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 12:23:36.737861       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 12:23:37.612030       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 12:23:37.729437       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 12:23:37.746440       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 12:23:37.830920       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 12:23:37.840923       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 12:23:48.916067       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 12:23:49.017496       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [7ad7f7b96f84996531bb595e6e5e24fb9e8a513373562f78426f1a2175bafea1] <==
	I0805 12:21:55.970932       1 controller.go:157] Shutting down quota evaluator
	I0805 12:21:55.970998       1 controller.go:176] quota evaluator worker shutdown
	W0805 12:21:55.971245       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.971586       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972015       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972092       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972665       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.972801       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0805 12:21:55.974131       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:21:55.974186       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:21:55.974211       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:21:55.974233       1 controller.go:176] quota evaluator worker shutdown
	W0805 12:21:55.974298       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.976907       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977110       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977180       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977242       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977303       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977369       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977428       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977496       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977556       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.977726       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.978469       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 12:21:55.980204       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [9f5bff1b0b6709c5cec533eaef857f59d546deee5fb23e9647d7dbdcd5b6645a] <==
	I0805 12:17:19.787254       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0805 12:17:47.991235       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m02\" does not exist"
	I0805 12:17:48.002561       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m02" podCIDRs=["10.244.1.0/24"]
	I0805 12:17:49.791209       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-841883-m02"
	I0805 12:18:07.823311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:18:10.093339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.228981ms"
	I0805 12:18:10.108340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.658887ms"
	I0805 12:18:10.108429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.947µs"
	I0805 12:18:13.537868       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.564433ms"
	I0805 12:18:13.538035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.096µs"
	I0805 12:18:13.863829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.454982ms"
	I0805 12:18:13.864125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.326µs"
	I0805 12:18:41.751566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:18:41.753696       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m03\" does not exist"
	I0805 12:18:41.797709       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m03" podCIDRs=["10.244.2.0/24"]
	I0805 12:18:44.810671       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-841883-m03"
	I0805 12:19:00.998936       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:19:29.398495       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:19:30.369500       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:19:30.370916       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m03\" does not exist"
	I0805 12:19:30.384126       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m03" podCIDRs=["10.244.3.0/24"]
	I0805 12:19:50.231580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:20:34.868908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:20:34.934799       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.872132ms"
	I0805 12:20:34.934933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.575µs"
	
	
	==> kube-controller-manager [e1f79275fe330946cd4b64487589ea3c51d9ccbd7d29eece0acaf200f2a63cbc] <==
	I0805 12:24:11.912884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.025µs"
	I0805 12:24:13.178708       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m02\" does not exist"
	I0805 12:24:13.193833       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m02" podCIDRs=["10.244.1.0/24"]
	I0805 12:24:15.099960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.521µs"
	I0805 12:24:15.108195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.456µs"
	I0805 12:24:15.117427       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.663µs"
	I0805 12:24:15.125520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.217µs"
	I0805 12:24:15.129824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.572µs"
	I0805 12:24:32.677209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:24:32.706257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.87µs"
	I0805 12:24:32.730018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.336µs"
	I0805 12:24:36.391198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.727843ms"
	I0805 12:24:36.391398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.011µs"
	I0805 12:24:50.887271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:24:52.179046       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-841883-m03\" does not exist"
	I0805 12:24:52.179545       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:24:52.188497       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-841883-m03" podCIDRs=["10.244.2.0/24"]
	I0805 12:25:11.751900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:25:17.001445       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-841883-m02"
	I0805 12:25:58.809850       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.473156ms"
	I0805 12:25:58.811954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.248µs"
	I0805 12:26:08.678728       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-572vf"
	I0805 12:26:08.713991       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-572vf"
	I0805 12:26:08.714032       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-fjx7z"
	I0805 12:26:08.739711       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-fjx7z"
	
	
	==> kube-proxy [69b19862fff81457c30fc3f2c95dd1ddb95078eabced6e3d18ede6ff578fc015] <==
	I0805 12:23:36.668510       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:23:36.690534       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.86"]
	I0805 12:23:36.765501       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:23:36.765562       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:23:36.765580       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:23:36.773929       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:23:36.774145       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:23:36.774174       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:23:36.779064       1 config.go:192] "Starting service config controller"
	I0805 12:23:36.779099       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:23:36.779214       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:23:36.779234       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:23:36.791177       1 config.go:319] "Starting node config controller"
	I0805 12:23:36.791208       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:23:36.880188       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:23:36.880257       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:23:36.891577       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e6441cea8b8c78541b9bd98f0b805d148d801178138fb4e18bb54327800d11f1] <==
	I0805 12:17:01.751827       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:17:01.765482       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.86"]
	I0805 12:17:01.811177       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:17:01.811231       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:17:01.811287       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:17:01.814313       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:17:01.814694       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:17:01.814724       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:17:01.816046       1 config.go:192] "Starting service config controller"
	I0805 12:17:01.816304       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:17:01.816360       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:17:01.816366       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:17:01.817272       1 config.go:319] "Starting node config controller"
	I0805 12:17:01.817392       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:17:01.916967       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:17:01.917026       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:17:01.919262       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [38b97b9b3cf57db4b8524e9f2c6d9ba04d00d56377c723b6c3868713d10fa6fe] <==
	E0805 12:16:45.517953       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 12:16:45.601382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 12:16:45.601829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 12:16:45.625794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.626303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.629890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 12:16:45.629948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 12:16:45.657222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 12:16:45.657266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 12:16:45.724725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 12:16:45.724834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 12:16:45.761907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.762005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.769549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.769672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.776575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 12:16:45.776668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 12:16:45.838418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 12:16:45.838472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 12:16:45.979469       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 12:16:45.979518       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0805 12:16:48.604422       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:21:55.957744       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0805 12:21:55.957872       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0805 12:21:55.958027       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5117bb87b2f82290d53e273522bc4c8c828f19edcea13a3790022d30ee6f3650] <==
	I0805 12:23:33.432210       1 serving.go:380] Generated self-signed cert in-memory
	W0805 12:23:35.770938       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:23:35.771016       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:23:35.771660       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:23:35.771714       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:23:35.790725       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 12:23:35.790868       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:23:35.793195       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:23:35.794151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:23:35.794235       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:23:35.794276       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 12:23:35.895291       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068114    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49-xtables-lock\") pod \"kube-proxy-h2bf5\" (UID: \"cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49\") " pod="kube-system/kube-proxy-h2bf5"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068189    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3de46bbd-b3ee-4132-927a-2abded24a986-xtables-lock\") pod \"kindnet-cwklz\" (UID: \"3de46bbd-b3ee-4132-927a-2abded24a986\") " pod="kube-system/kindnet-cwklz"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068252    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d4d95110-27dc-4a02-810d-c60f43201bde-tmp\") pod \"storage-provisioner\" (UID: \"d4d95110-27dc-4a02-810d-c60f43201bde\") " pod="kube-system/storage-provisioner"
	Aug 05 12:23:36 multinode-841883 kubelet[3373]: I0805 12:23:36.068314    3373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49-lib-modules\") pod \"kube-proxy-h2bf5\" (UID: \"cadf65c5-0bf5-4c49-9ab5-442c0b3c6f49\") " pod="kube-system/kube-proxy-h2bf5"
	Aug 05 12:23:38 multinode-841883 kubelet[3373]: I0805 12:23:38.849726    3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 05 12:24:32 multinode-841883 kubelet[3373]: E0805 12:24:32.142413    3373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:24:32 multinode-841883 kubelet[3373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 12:25:32 multinode-841883 kubelet[3373]: E0805 12:25:32.142678    3373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:25:32 multinode-841883 kubelet[3373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:25:32 multinode-841883 kubelet[3373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:25:32 multinode-841883 kubelet[3373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:25:32 multinode-841883 kubelet[3373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 12:26:32 multinode-841883 kubelet[3373]: E0805 12:26:32.142989    3373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:26:32 multinode-841883 kubelet[3373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:26:32 multinode-841883 kubelet[3373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:26:32 multinode-841883 kubelet[3373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:26:32 multinode-841883 kubelet[3373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 12:27:32 multinode-841883 kubelet[3373]: E0805 12:27:32.142770    3373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 12:27:32 multinode-841883 kubelet[3373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 12:27:32 multinode-841883 kubelet[3373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 12:27:32 multinode-841883 kubelet[3373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 12:27:32 multinode-841883 kubelet[3373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:27:37.777599  423442 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-841883 -n multinode-841883
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-841883 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.41s)
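Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner rejects any line longer than its buffer limit (64 KiB by default), so a single very long line in lastStart.txt is enough to abort the read. A minimal sketch of reading such a file with a larger buffer follows; this is not minikube's actual logs.go code, and the file path and buffer sizes are illustrative assumptions only.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative path only
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.Scanner's default cap is bufio.MaxScanTokenSize (64 KiB); a longer
	// line yields "bufio.Scanner: token too long". Raise the cap before scanning.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process each line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

Growing the scanner buffer (or reading with bufio.Reader instead) is the usual way to handle arbitrarily long log lines.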

                                                
                                    
x
+
TestPreload (279.39s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-282629 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0805 12:32:52.926619  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-282629 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m16.26306525s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-282629 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-282629 image pull gcr.io/k8s-minikube/busybox: (2.919658277s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-282629
E0805 12:35:27.754597  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-282629: exit status 82 (2m0.460433706s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-282629"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-282629 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-05 12:35:49.589903304 +0000 UTC m=+4136.547226997
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-282629 -n test-preload-282629
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-282629 -n test-preload-282629: exit status 3 (18.578633036s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:36:08.164108  426310 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	E0805 12:36:08.164128  426310 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-282629" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-282629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-282629
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-282629: (1.166100192s)
--- FAIL: TestPreload (279.39s)
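For context on the GUEST_STOP_TIMEOUT above: a stop path of this kind polls the VM's state and gives up when the machine is still "Running" at the deadline, which is what the stderr reports here. The following is a generic, hedged sketch of such a stop-with-deadline loop, not minikube's actual implementation; getState is an assumed stand-in for whatever driver call reports the machine state.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitStopped polls getState until it reports "Stopped" or ctx expires.
func waitStopped(ctx context.Context, getState func() (string, error)) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Mirrors the shape of the error seen in the report above.
			return errors.New(`stop: unable to stop vm, current state "Running"`)
		case <-ticker.C:
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
		}
	}
}

func main() {
	// The failed run above waited roughly two minutes; a short timeout is used
	// here only so the sketch finishes quickly.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	err := waitStopped(ctx, func() (string, error) { return "Running", nil })
	fmt.Println(err) // prints the timeout error, as in the failed stop
}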

                                                
                                    
x
+
TestKubernetesUpgrade (445.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m47.233544077s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-515808] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-515808" primary control-plane node in "kubernetes-upgrade-515808" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:40:56.428149  432266 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:40:56.428407  432266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:40:56.428417  432266 out.go:304] Setting ErrFile to fd 2...
	I0805 12:40:56.428421  432266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:40:56.428586  432266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:40:56.429150  432266 out.go:298] Setting JSON to false
	I0805 12:40:56.430100  432266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8603,"bootTime":1722853053,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:40:56.430163  432266 start.go:139] virtualization: kvm guest
	I0805 12:40:56.432306  432266 out.go:177] * [kubernetes-upgrade-515808] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:40:56.433527  432266 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:40:56.433532  432266 notify.go:220] Checking for updates...
	I0805 12:40:56.434683  432266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:40:56.435866  432266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:40:56.437096  432266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:40:56.438262  432266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:40:56.439314  432266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:40:56.440735  432266 config.go:182] Loaded profile config "NoKubernetes-833202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0805 12:40:56.440837  432266 config.go:182] Loaded profile config "cert-expiration-623276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:40:56.440952  432266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:40:56.480140  432266 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 12:40:56.481494  432266 start.go:297] selected driver: kvm2
	I0805 12:40:56.481509  432266 start.go:901] validating driver "kvm2" against <nil>
	I0805 12:40:56.481520  432266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:40:56.482333  432266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:40:56.482442  432266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:40:56.497946  432266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:40:56.498002  432266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 12:40:56.498222  432266 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 12:40:56.498281  432266 cni.go:84] Creating CNI manager for ""
	I0805 12:40:56.498294  432266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:40:56.498301  432266 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 12:40:56.498368  432266 start.go:340] cluster config:
	{Name:kubernetes-upgrade-515808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:40:56.498489  432266 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:40:56.500523  432266 out.go:177] * Starting "kubernetes-upgrade-515808" primary control-plane node in "kubernetes-upgrade-515808" cluster
	I0805 12:40:56.501905  432266 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:40:56.501949  432266 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:40:56.501972  432266 cache.go:56] Caching tarball of preloaded images
	I0805 12:40:56.502055  432266 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:40:56.502101  432266 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:40:56.502239  432266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/config.json ...
	I0805 12:40:56.502271  432266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/config.json: {Name:mkce02c03d85dbafdbe5e982870f291b87d2dc95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:40:56.502454  432266 start.go:360] acquireMachinesLock for kubernetes-upgrade-515808: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:41:13.192655  432266 start.go:364] duration metric: took 16.690158372s to acquireMachinesLock for "kubernetes-upgrade-515808"
	I0805 12:41:13.192728  432266 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-515808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:41:13.192833  432266 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 12:41:13.194860  432266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 12:41:13.195084  432266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:41:13.195148  432266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:41:13.211441  432266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0805 12:41:13.211868  432266 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:41:13.212433  432266 main.go:141] libmachine: Using API Version  1
	I0805 12:41:13.212458  432266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:41:13.212819  432266 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:41:13.213014  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetMachineName
	I0805 12:41:13.213181  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:13.213332  432266 start.go:159] libmachine.API.Create for "kubernetes-upgrade-515808" (driver="kvm2")
	I0805 12:41:13.213363  432266 client.go:168] LocalClient.Create starting
	I0805 12:41:13.213397  432266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 12:41:13.213435  432266 main.go:141] libmachine: Decoding PEM data...
	I0805 12:41:13.213461  432266 main.go:141] libmachine: Parsing certificate...
	I0805 12:41:13.213540  432266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 12:41:13.213569  432266 main.go:141] libmachine: Decoding PEM data...
	I0805 12:41:13.213586  432266 main.go:141] libmachine: Parsing certificate...
	I0805 12:41:13.213616  432266 main.go:141] libmachine: Running pre-create checks...
	I0805 12:41:13.213629  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .PreCreateCheck
	I0805 12:41:13.214071  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetConfigRaw
	I0805 12:41:13.214553  432266 main.go:141] libmachine: Creating machine...
	I0805 12:41:13.214575  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .Create
	I0805 12:41:13.214720  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Creating KVM machine...
	I0805 12:41:13.216069  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found existing default KVM network
	I0805 12:41:13.217472  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:13.217290  432413 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d9:76:73} reservation:<nil>}
	I0805 12:41:13.218598  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:13.218504  432413 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:1f:61} reservation:<nil>}
	I0805 12:41:13.219675  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:13.219590  432413 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026cdb0}
	I0805 12:41:13.219779  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | created network xml: 
	I0805 12:41:13.219805  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | <network>
	I0805 12:41:13.219823  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |   <name>mk-kubernetes-upgrade-515808</name>
	I0805 12:41:13.219842  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |   <dns enable='no'/>
	I0805 12:41:13.219855  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |   
	I0805 12:41:13.219878  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0805 12:41:13.219891  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |     <dhcp>
	I0805 12:41:13.219911  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0805 12:41:13.219933  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |     </dhcp>
	I0805 12:41:13.219950  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |   </ip>
	I0805 12:41:13.219964  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG |   
	I0805 12:41:13.219975  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | </network>
	I0805 12:41:13.219991  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | 
	I0805 12:41:13.224949  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | trying to create private KVM network mk-kubernetes-upgrade-515808 192.168.61.0/24...
	I0805 12:41:13.292436  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | private KVM network mk-kubernetes-upgrade-515808 192.168.61.0/24 created
	I0805 12:41:13.292469  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808 ...
	I0805 12:41:13.292483  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:13.292391  432413 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:41:13.292503  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 12:41:13.292636  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 12:41:13.554907  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:13.554783  432413 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa...
	I0805 12:41:13.653658  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:13.653517  432413 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/kubernetes-upgrade-515808.rawdisk...
	I0805 12:41:13.653698  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Writing magic tar header
	I0805 12:41:13.653723  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Writing SSH key tar header
	I0805 12:41:13.653740  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:13.653632  432413 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808 ...
	I0805 12:41:13.653766  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808
	I0805 12:41:13.653822  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808 (perms=drwx------)
	I0805 12:41:13.653852  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 12:41:13.653865  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 12:41:13.653891  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:41:13.653901  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 12:41:13.653910  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 12:41:13.653916  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Checking permissions on dir: /home/jenkins
	I0805 12:41:13.653945  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Checking permissions on dir: /home
	I0805 12:41:13.653968  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Skipping /home - not owner
	I0805 12:41:13.653984  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 12:41:13.654001  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 12:41:13.654016  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 12:41:13.654031  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 12:41:13.654042  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Creating domain...
	I0805 12:41:13.655063  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) define libvirt domain using xml: 
	I0805 12:41:13.655088  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) <domain type='kvm'>
	I0805 12:41:13.655123  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   <name>kubernetes-upgrade-515808</name>
	I0805 12:41:13.655146  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   <memory unit='MiB'>2200</memory>
	I0805 12:41:13.655158  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   <vcpu>2</vcpu>
	I0805 12:41:13.655164  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   <features>
	I0805 12:41:13.655170  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <acpi/>
	I0805 12:41:13.655184  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <apic/>
	I0805 12:41:13.655195  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <pae/>
	I0805 12:41:13.655204  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     
	I0805 12:41:13.655217  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   </features>
	I0805 12:41:13.655228  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   <cpu mode='host-passthrough'>
	I0805 12:41:13.655235  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   
	I0805 12:41:13.655242  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   </cpu>
	I0805 12:41:13.655254  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   <os>
	I0805 12:41:13.655261  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <type>hvm</type>
	I0805 12:41:13.655267  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <boot dev='cdrom'/>
	I0805 12:41:13.655274  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <boot dev='hd'/>
	I0805 12:41:13.655283  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <bootmenu enable='no'/>
	I0805 12:41:13.655293  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   </os>
	I0805 12:41:13.655302  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   <devices>
	I0805 12:41:13.655313  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <disk type='file' device='cdrom'>
	I0805 12:41:13.655332  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/boot2docker.iso'/>
	I0805 12:41:13.655343  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <target dev='hdc' bus='scsi'/>
	I0805 12:41:13.655353  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <readonly/>
	I0805 12:41:13.655358  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     </disk>
	I0805 12:41:13.655364  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <disk type='file' device='disk'>
	I0805 12:41:13.655374  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 12:41:13.655388  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/kubernetes-upgrade-515808.rawdisk'/>
	I0805 12:41:13.655400  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <target dev='hda' bus='virtio'/>
	I0805 12:41:13.655409  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     </disk>
	I0805 12:41:13.655421  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <interface type='network'>
	I0805 12:41:13.655434  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <source network='mk-kubernetes-upgrade-515808'/>
	I0805 12:41:13.655451  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <model type='virtio'/>
	I0805 12:41:13.655463  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     </interface>
	I0805 12:41:13.655471  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <interface type='network'>
	I0805 12:41:13.655477  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <source network='default'/>
	I0805 12:41:13.655487  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <model type='virtio'/>
	I0805 12:41:13.655498  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     </interface>
	I0805 12:41:13.655506  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <serial type='pty'>
	I0805 12:41:13.655519  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <target port='0'/>
	I0805 12:41:13.655529  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     </serial>
	I0805 12:41:13.655538  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <console type='pty'>
	I0805 12:41:13.655549  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <target type='serial' port='0'/>
	I0805 12:41:13.655560  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     </console>
	I0805 12:41:13.655569  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     <rng model='virtio'>
	I0805 12:41:13.655592  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)       <backend model='random'>/dev/random</backend>
	I0805 12:41:13.655604  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     </rng>
	I0805 12:41:13.655612  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     
	I0805 12:41:13.655625  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)     
	I0805 12:41:13.655656  432266 main.go:141] libmachine: (kubernetes-upgrade-515808)   </devices>
	I0805 12:41:13.655677  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) </domain>
	I0805 12:41:13.655689  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) 
	I0805 12:41:13.661114  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:03:0f:72 in network default
	I0805 12:41:13.661929  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Ensuring networks are active...
	I0805 12:41:13.661950  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:13.662625  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Ensuring network default is active
	I0805 12:41:13.662972  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Ensuring network mk-kubernetes-upgrade-515808 is active
	I0805 12:41:13.663532  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Getting domain xml...
	I0805 12:41:13.664313  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Creating domain...
	I0805 12:41:14.917207  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Waiting to get IP...
	I0805 12:41:14.917871  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:14.918329  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:14.918361  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:14.918295  432413 retry.go:31] will retry after 222.444468ms: waiting for machine to come up
	I0805 12:41:15.142861  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:15.143360  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:15.143387  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:15.143307  432413 retry.go:31] will retry after 355.439696ms: waiting for machine to come up
	I0805 12:41:15.500784  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:15.501163  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:15.501188  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:15.501122  432413 retry.go:31] will retry after 420.153089ms: waiting for machine to come up
	I0805 12:41:15.924000  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:15.924472  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:15.924511  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:15.924415  432413 retry.go:31] will retry after 518.59288ms: waiting for machine to come up
	I0805 12:41:16.444890  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:16.445399  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:16.445448  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:16.445344  432413 retry.go:31] will retry after 606.284098ms: waiting for machine to come up
	I0805 12:41:17.052829  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:17.053243  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:17.053272  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:17.053191  432413 retry.go:31] will retry after 912.27588ms: waiting for machine to come up
	I0805 12:41:17.967021  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:17.967483  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:17.967512  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:17.967437  432413 retry.go:31] will retry after 816.260668ms: waiting for machine to come up
	I0805 12:41:18.785624  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:18.786171  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:18.786200  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:18.786128  432413 retry.go:31] will retry after 1.166979716s: waiting for machine to come up
	I0805 12:41:19.954818  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:19.955235  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:19.955262  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:19.955188  432413 retry.go:31] will retry after 1.544285189s: waiting for machine to come up
	I0805 12:41:21.501883  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:21.502281  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:21.502307  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:21.502220  432413 retry.go:31] will retry after 1.824492806s: waiting for machine to come up
	I0805 12:41:23.327909  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:23.328323  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:23.328354  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:23.328272  432413 retry.go:31] will retry after 1.854554428s: waiting for machine to come up
	I0805 12:41:25.184997  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:25.185501  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:25.185529  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:25.185442  432413 retry.go:31] will retry after 3.366175104s: waiting for machine to come up
	I0805 12:41:28.552831  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:28.553303  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:28.553335  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:28.553238  432413 retry.go:31] will retry after 3.956495973s: waiting for machine to come up
	I0805 12:41:32.513980  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:32.514459  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find current IP address of domain kubernetes-upgrade-515808 in network mk-kubernetes-upgrade-515808
	I0805 12:41:32.514477  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | I0805 12:41:32.514422  432413 retry.go:31] will retry after 4.051501667s: waiting for machine to come up
	I0805 12:41:36.570186  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.570819  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Found IP for machine: 192.168.61.242
	I0805 12:41:36.570843  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Reserving static IP address...
	I0805 12:41:36.570874  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has current primary IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.571263  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-515808", mac: "52:54:00:c9:63:a0", ip: "192.168.61.242"} in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.646221  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Reserved static IP address: 192.168.61.242
	I0805 12:41:36.646257  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Getting to WaitForSSH function...
	I0805 12:41:36.646268  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Waiting for SSH to be available...
	I0805 12:41:36.648739  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.649256  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:36.649290  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.649383  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Using SSH client type: external
	I0805 12:41:36.649443  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa (-rw-------)
	I0805 12:41:36.649489  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:41:36.649514  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | About to run SSH command:
	I0805 12:41:36.649525  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | exit 0
	I0805 12:41:36.783915  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | SSH cmd err, output: <nil>: 
	I0805 12:41:36.784192  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) KVM machine creation complete!
	I0805 12:41:36.784432  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetConfigRaw
	I0805 12:41:36.785024  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:36.785247  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:36.785434  432266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 12:41:36.785450  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetState
	I0805 12:41:36.786757  432266 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 12:41:36.786773  432266 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 12:41:36.786781  432266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 12:41:36.786790  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:36.789191  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.789541  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:36.789575  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.789667  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:36.789824  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:36.789985  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:36.790106  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:36.790251  432266 main.go:141] libmachine: Using SSH client type: native
	I0805 12:41:36.790494  432266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.242 22 <nil> <nil>}
	I0805 12:41:36.790508  432266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 12:41:36.899418  432266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:41:36.899444  432266 main.go:141] libmachine: Detecting the provisioner...
	I0805 12:41:36.899455  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:36.902170  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.902519  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:36.902561  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:36.902676  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:36.902840  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:36.902997  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:36.903136  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:36.903313  432266 main.go:141] libmachine: Using SSH client type: native
	I0805 12:41:36.903549  432266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.242 22 <nil> <nil>}
	I0805 12:41:36.903566  432266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 12:41:37.016609  432266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 12:41:37.016700  432266 main.go:141] libmachine: found compatible host: buildroot
	I0805 12:41:37.016717  432266 main.go:141] libmachine: Provisioning with buildroot...
	I0805 12:41:37.016731  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetMachineName
	I0805 12:41:37.017011  432266 buildroot.go:166] provisioning hostname "kubernetes-upgrade-515808"
	I0805 12:41:37.017046  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetMachineName
	I0805 12:41:37.017237  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:37.020213  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.020642  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.020673  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.020812  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:37.021000  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.021159  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.021295  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:37.021457  432266 main.go:141] libmachine: Using SSH client type: native
	I0805 12:41:37.021629  432266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.242 22 <nil> <nil>}
	I0805 12:41:37.021644  432266 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-515808 && echo "kubernetes-upgrade-515808" | sudo tee /etc/hostname
	I0805 12:41:37.146581  432266 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-515808
	
	I0805 12:41:37.146609  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:37.149441  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.149822  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.149851  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.150041  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:37.150233  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.150357  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.150536  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:37.150850  432266 main.go:141] libmachine: Using SSH client type: native
	I0805 12:41:37.151104  432266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.242 22 <nil> <nil>}
	I0805 12:41:37.151130  432266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-515808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-515808/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-515808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:41:37.269526  432266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:41:37.269565  432266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:41:37.269595  432266 buildroot.go:174] setting up certificates
	I0805 12:41:37.269615  432266 provision.go:84] configureAuth start
	I0805 12:41:37.269656  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetMachineName
	I0805 12:41:37.269955  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetIP
	I0805 12:41:37.272784  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.273253  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.273287  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.273367  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:37.275413  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.275813  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.275835  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.276001  432266 provision.go:143] copyHostCerts
	I0805 12:41:37.276099  432266 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:41:37.276116  432266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:41:37.276195  432266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:41:37.276333  432266 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:41:37.276344  432266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:41:37.276381  432266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:41:37.276497  432266 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:41:37.276509  432266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:41:37.276542  432266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:41:37.276639  432266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-515808 san=[127.0.0.1 192.168.61.242 kubernetes-upgrade-515808 localhost minikube]
	I0805 12:41:37.387699  432266 provision.go:177] copyRemoteCerts
	I0805 12:41:37.387772  432266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:41:37.387804  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:37.390247  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.390621  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.390655  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.390813  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:37.391020  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.391178  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:37.391319  432266 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa Username:docker}
	I0805 12:41:37.478136  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:41:37.503230  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0805 12:41:37.526336  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:41:37.549610  432266 provision.go:87] duration metric: took 279.957575ms to configureAuth
	I0805 12:41:37.549642  432266 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:41:37.549840  432266 config.go:182] Loaded profile config "kubernetes-upgrade-515808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:41:37.549919  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:37.552312  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.552651  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.552688  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.552837  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:37.553036  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.553179  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.553302  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:37.553441  432266 main.go:141] libmachine: Using SSH client type: native
	I0805 12:41:37.553619  432266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.242 22 <nil> <nil>}
	I0805 12:41:37.553633  432266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:41:37.834878  432266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:41:37.834910  432266 main.go:141] libmachine: Checking connection to Docker...
	I0805 12:41:37.834920  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetURL
	I0805 12:41:37.836202  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Using libvirt version 6000000
	I0805 12:41:37.838622  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.838951  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.838983  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.839159  432266 main.go:141] libmachine: Docker is up and running!
	I0805 12:41:37.839178  432266 main.go:141] libmachine: Reticulating splines...
	I0805 12:41:37.839187  432266 client.go:171] duration metric: took 24.625814831s to LocalClient.Create
	I0805 12:41:37.839210  432266 start.go:167] duration metric: took 24.625878222s to libmachine.API.Create "kubernetes-upgrade-515808"
	I0805 12:41:37.839220  432266 start.go:293] postStartSetup for "kubernetes-upgrade-515808" (driver="kvm2")
	I0805 12:41:37.839233  432266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:41:37.839249  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:37.839468  432266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:41:37.839493  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:37.841481  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.841783  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.841813  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.841944  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:37.842137  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.842319  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:37.842480  432266 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa Username:docker}
	I0805 12:41:37.935085  432266 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:41:37.940923  432266 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:41:37.940946  432266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:41:37.941020  432266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:41:37.941123  432266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:41:37.941237  432266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:41:37.952912  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:41:37.976942  432266 start.go:296] duration metric: took 137.704495ms for postStartSetup
	I0805 12:41:37.977002  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetConfigRaw
	I0805 12:41:37.977640  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetIP
	I0805 12:41:37.980135  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.980463  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.980493  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.980695  432266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/config.json ...
	I0805 12:41:37.980880  432266 start.go:128] duration metric: took 24.788036016s to createHost
	I0805 12:41:37.980904  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:37.983063  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.983415  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:37.983449  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:37.983583  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:37.983789  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.983959  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:37.984116  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:37.984289  432266 main.go:141] libmachine: Using SSH client type: native
	I0805 12:41:37.984448  432266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.242 22 <nil> <nil>}
	I0805 12:41:37.984460  432266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:41:38.096972  432266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722861698.073331525
	
	I0805 12:41:38.096998  432266 fix.go:216] guest clock: 1722861698.073331525
	I0805 12:41:38.097006  432266 fix.go:229] Guest: 2024-08-05 12:41:38.073331525 +0000 UTC Remote: 2024-08-05 12:41:37.980892484 +0000 UTC m=+41.590767819 (delta=92.439041ms)
	I0805 12:41:38.097042  432266 fix.go:200] guest clock delta is within tolerance: 92.439041ms
	I0805 12:41:38.097048  432266 start.go:83] releasing machines lock for "kubernetes-upgrade-515808", held for 24.904363903s
	I0805 12:41:38.097074  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:38.097375  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetIP
	I0805 12:41:38.100258  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:38.100645  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:38.100684  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:38.100874  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:38.101430  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:38.101632  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:41:38.101760  432266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:41:38.101808  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:38.101892  432266 ssh_runner.go:195] Run: cat /version.json
	I0805 12:41:38.101921  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:41:38.104487  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:38.104826  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:38.104858  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:38.104881  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:38.105067  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:38.105278  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:38.105447  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:38.105449  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:38.105478  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:38.105615  432266 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa Username:docker}
	I0805 12:41:38.105689  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:41:38.105843  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:41:38.105984  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:41:38.106152  432266 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa Username:docker}
	I0805 12:41:38.214460  432266 ssh_runner.go:195] Run: systemctl --version
	I0805 12:41:38.221721  432266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:41:38.385344  432266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:41:38.392343  432266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:41:38.392436  432266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:41:38.409249  432266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:41:38.409273  432266 start.go:495] detecting cgroup driver to use...
	I0805 12:41:38.409350  432266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:41:38.429803  432266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:41:38.445748  432266 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:41:38.445809  432266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:41:38.462057  432266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:41:38.476852  432266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:41:38.602087  432266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:41:38.763862  432266 docker.go:233] disabling docker service ...
	I0805 12:41:38.763955  432266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:41:38.781897  432266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:41:38.797314  432266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:41:38.931631  432266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:41:39.048061  432266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:41:39.062812  432266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:41:39.084279  432266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:41:39.084350  432266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:41:39.095500  432266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:41:39.095564  432266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:41:39.106162  432266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:41:39.116827  432266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:41:39.127381  432266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:41:39.138794  432266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:41:39.148679  432266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:41:39.148759  432266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:41:39.162610  432266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:41:39.172904  432266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:41:39.287172  432266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:41:39.440603  432266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:41:39.440687  432266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:41:39.446822  432266 start.go:563] Will wait 60s for crictl version
	I0805 12:41:39.446890  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:39.451064  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:41:39.496875  432266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:41:39.496987  432266 ssh_runner.go:195] Run: crio --version
	I0805 12:41:39.537436  432266 ssh_runner.go:195] Run: crio --version
	I0805 12:41:39.572854  432266 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:41:39.574253  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetIP
	I0805 12:41:39.577618  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:39.578076  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:41:39.578112  432266 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:41:39.578330  432266 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:41:39.582873  432266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:41:39.596411  432266 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-515808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.242 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:41:39.596554  432266 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:41:39.596628  432266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:41:39.628142  432266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:41:39.628211  432266 ssh_runner.go:195] Run: which lz4
	I0805 12:41:39.632400  432266 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:41:39.636759  432266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:41:39.636793  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:41:41.352143  432266 crio.go:462] duration metric: took 1.719780832s to copy over tarball
	I0805 12:41:41.352213  432266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:41:43.921914  432266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.56966942s)
	I0805 12:41:43.921958  432266 crio.go:469] duration metric: took 2.569784155s to extract the tarball
	I0805 12:41:43.921977  432266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:41:43.983435  432266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:41:44.026837  432266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:41:44.026863  432266 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:41:44.026925  432266 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:41:44.026963  432266 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:41:44.026989  432266 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:41:44.027015  432266 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:41:44.027030  432266 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:41:44.027119  432266 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:41:44.026949  432266 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:41:44.026972  432266 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:41:44.028840  432266 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:41:44.028867  432266 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:41:44.028882  432266 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:41:44.028841  432266 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:41:44.028848  432266 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:41:44.028907  432266 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:41:44.028936  432266 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:41:44.028945  432266 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:41:44.189906  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:41:44.231639  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:41:44.233257  432266 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:41:44.233302  432266 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:41:44.233342  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:44.276482  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:41:44.276504  432266 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:41:44.276538  432266 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:41:44.276567  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:44.281518  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:41:44.331226  432266 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:41:44.353162  432266 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:41:44.354533  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:41:44.358141  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:41:44.367657  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:41:44.368730  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:41:44.392225  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:41:44.430034  432266 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:41:44.430069  432266 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:41:44.430081  432266 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:41:44.430104  432266 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:41:44.430129  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:44.430146  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:44.472839  432266 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:41:44.472922  432266 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:41:44.472865  432266 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:41:44.472981  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:44.473004  432266 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:41:44.473051  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:44.495110  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:41:44.495127  432266 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:41:44.495171  432266 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:41:44.495176  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:41:44.495205  432266 ssh_runner.go:195] Run: which crictl
	I0805 12:41:44.495247  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:41:44.495277  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:41:44.588867  432266 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:41:44.588978  432266 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:41:44.589000  432266 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:41:44.589039  432266 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:41:44.589068  432266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:41:44.620883  432266 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:41:44.883024  432266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:41:45.025890  432266 cache_images.go:92] duration metric: took 999.010271ms to LoadCachedImages
	W0805 12:41:45.026001  432266 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0805 12:41:45.026022  432266 kubeadm.go:934] updating node { 192.168.61.242 8443 v1.20.0 crio true true} ...
	I0805 12:41:45.026142  432266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-515808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:41:45.026231  432266 ssh_runner.go:195] Run: crio config
	I0805 12:41:45.075256  432266 cni.go:84] Creating CNI manager for ""
	I0805 12:41:45.075277  432266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:41:45.075293  432266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:41:45.075316  432266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.242 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-515808 NodeName:kubernetes-upgrade-515808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:41:45.075495  432266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-515808"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:41:45.075578  432266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:41:45.086252  432266 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:41:45.086332  432266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:41:45.096475  432266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0805 12:41:45.113991  432266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:41:45.130831  432266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0805 12:41:45.148250  432266 ssh_runner.go:195] Run: grep 192.168.61.242	control-plane.minikube.internal$ /etc/hosts
	I0805 12:41:45.152841  432266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:41:45.165638  432266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:41:45.296737  432266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:41:45.316703  432266 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808 for IP: 192.168.61.242
	I0805 12:41:45.316745  432266 certs.go:194] generating shared ca certs ...
	I0805 12:41:45.316763  432266 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:41:45.316945  432266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:41:45.316997  432266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:41:45.317010  432266 certs.go:256] generating profile certs ...
	I0805 12:41:45.317141  432266 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.key
	I0805 12:41:45.317162  432266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.crt with IP's: []
	I0805 12:41:45.410592  432266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.crt ...
	I0805 12:41:45.410629  432266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.crt: {Name:mk03808bd4485fca19bf2abab77cc4d5b82d00e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:41:45.410859  432266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.key ...
	I0805 12:41:45.410882  432266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.key: {Name:mkf46af290d8118f2e374bae98f0e81d018691c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:41:45.411025  432266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.key.2ebe000d
	I0805 12:41:45.411056  432266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.crt.2ebe000d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.242]
	I0805 12:41:45.608972  432266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.crt.2ebe000d ...
	I0805 12:41:45.609007  432266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.crt.2ebe000d: {Name:mk29f2de44c7b2fbe662290e4d80133b7fd08394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:41:45.609169  432266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.key.2ebe000d ...
	I0805 12:41:45.609182  432266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.key.2ebe000d: {Name:mka1a224927d91562ccdb7479550a97d3322643d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:41:45.609248  432266 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.crt.2ebe000d -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.crt
	I0805 12:41:45.609343  432266 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.key.2ebe000d -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.key
	I0805 12:41:45.609411  432266 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.key
	I0805 12:41:45.609435  432266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.crt with IP's: []
	I0805 12:41:45.757591  432266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.crt ...
	I0805 12:41:45.757634  432266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.crt: {Name:mk79844cfcab94d2d5b80158663ba7b12e5e4b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:41:45.757828  432266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.key ...
	I0805 12:41:45.757847  432266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.key: {Name:mk6c4e92c6b39b494c886a374f30edefee365734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:41:45.758068  432266 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:41:45.758126  432266 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:41:45.758144  432266 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:41:45.758174  432266 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:41:45.758208  432266 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:41:45.758240  432266 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:41:45.758292  432266 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:41:45.758874  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:41:45.788238  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:41:45.814600  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:41:45.840322  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:41:45.865463  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0805 12:41:45.892533  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:41:45.919414  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:41:45.945937  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:41:45.970960  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:41:45.997024  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:41:46.023181  432266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:41:46.048315  432266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:41:46.065449  432266 ssh_runner.go:195] Run: openssl version
	I0805 12:41:46.071827  432266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:41:46.082412  432266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:41:46.088253  432266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:41:46.088316  432266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:41:46.094171  432266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:41:46.108361  432266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:41:46.119358  432266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:41:46.123893  432266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:41:46.123943  432266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:41:46.129681  432266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:41:46.146756  432266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:41:46.160381  432266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:41:46.168185  432266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:41:46.168266  432266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:41:46.177994  432266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:41:46.194706  432266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:41:46.203484  432266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 12:41:46.203570  432266 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-515808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-515808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.242 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:41:46.203673  432266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:41:46.203789  432266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:41:46.259515  432266 cri.go:89] found id: ""
	I0805 12:41:46.259606  432266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:41:46.270135  432266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:41:46.280643  432266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:41:46.290186  432266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:41:46.290222  432266 kubeadm.go:157] found existing configuration files:
	
	I0805 12:41:46.290289  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:41:46.300809  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:41:46.300881  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:41:46.311794  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:41:46.321257  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:41:46.321323  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:41:46.331055  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:41:46.343798  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:41:46.343874  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:41:46.356850  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:41:46.366454  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:41:46.366536  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:41:46.380242  432266 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 12:41:46.662616  432266 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 12:43:45.210172  432266 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 12:43:45.210293  432266 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 12:43:45.211940  432266 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 12:43:45.212054  432266 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 12:43:45.212158  432266 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 12:43:45.212280  432266 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 12:43:45.212431  432266 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 12:43:45.212532  432266 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 12:43:45.214463  432266 out.go:204]   - Generating certificates and keys ...
	I0805 12:43:45.214565  432266 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 12:43:45.214665  432266 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 12:43:45.214785  432266 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 12:43:45.214880  432266 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 12:43:45.214969  432266 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 12:43:45.215040  432266 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 12:43:45.215112  432266 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 12:43:45.215293  432266 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-515808 localhost] and IPs [192.168.61.242 127.0.0.1 ::1]
	I0805 12:43:45.215399  432266 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 12:43:45.215596  432266 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-515808 localhost] and IPs [192.168.61.242 127.0.0.1 ::1]
	I0805 12:43:45.215684  432266 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 12:43:45.215814  432266 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 12:43:45.215891  432266 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 12:43:45.215970  432266 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 12:43:45.216041  432266 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 12:43:45.216112  432266 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 12:43:45.216234  432266 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 12:43:45.216313  432266 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 12:43:45.216456  432266 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 12:43:45.216564  432266 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 12:43:45.216630  432266 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 12:43:45.216729  432266 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 12:43:45.218318  432266 out.go:204]   - Booting up control plane ...
	I0805 12:43:45.218424  432266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 12:43:45.218510  432266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 12:43:45.218591  432266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 12:43:45.218686  432266 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 12:43:45.218840  432266 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 12:43:45.218897  432266 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 12:43:45.218979  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:43:45.219173  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:43:45.219265  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:43:45.219501  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:43:45.219595  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:43:45.219832  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:43:45.219908  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:43:45.220174  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:43:45.220280  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:43:45.220496  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:43:45.220510  432266 kubeadm.go:310] 
	I0805 12:43:45.220554  432266 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 12:43:45.220619  432266 kubeadm.go:310] 		timed out waiting for the condition
	I0805 12:43:45.220637  432266 kubeadm.go:310] 
	I0805 12:43:45.220671  432266 kubeadm.go:310] 	This error is likely caused by:
	I0805 12:43:45.220703  432266 kubeadm.go:310] 		- The kubelet is not running
	I0805 12:43:45.220810  432266 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 12:43:45.220818  432266 kubeadm.go:310] 
	I0805 12:43:45.220936  432266 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 12:43:45.220966  432266 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 12:43:45.221011  432266 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 12:43:45.221021  432266 kubeadm.go:310] 
	I0805 12:43:45.221141  432266 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 12:43:45.221237  432266 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 12:43:45.221246  432266 kubeadm.go:310] 
	I0805 12:43:45.221365  432266 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 12:43:45.221474  432266 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 12:43:45.221565  432266 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 12:43:45.221677  432266 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 12:43:45.221707  432266 kubeadm.go:310] 
	W0805 12:43:45.221827  432266 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-515808 localhost] and IPs [192.168.61.242 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-515808 localhost] and IPs [192.168.61.242 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-515808 localhost] and IPs [192.168.61.242 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-515808 localhost] and IPs [192.168.61.242 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0805 12:43:45.221886  432266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 12:43:46.201159  432266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:43:46.215723  432266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:43:46.226272  432266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:43:46.226305  432266 kubeadm.go:157] found existing configuration files:
	
	I0805 12:43:46.226384  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:43:46.240631  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:43:46.240697  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:43:46.254990  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:43:46.265854  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:43:46.265925  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:43:46.279888  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:43:46.293387  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:43:46.293460  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:43:46.307388  432266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:43:46.321386  432266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:43:46.321468  432266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:43:46.331207  432266 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 12:43:46.601996  432266 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 12:45:42.945840  432266 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 12:45:42.945958  432266 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 12:45:42.947950  432266 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 12:45:42.948043  432266 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 12:45:42.948187  432266 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 12:45:42.948275  432266 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 12:45:42.948371  432266 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 12:45:42.948463  432266 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 12:45:42.950702  432266 out.go:204]   - Generating certificates and keys ...
	I0805 12:45:42.950805  432266 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 12:45:42.950905  432266 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 12:45:42.951027  432266 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 12:45:42.951131  432266 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 12:45:42.951234  432266 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 12:45:42.951308  432266 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 12:45:42.951415  432266 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 12:45:42.951507  432266 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 12:45:42.951618  432266 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 12:45:42.951727  432266 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 12:45:42.951793  432266 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 12:45:42.951878  432266 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 12:45:42.952034  432266 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 12:45:42.952117  432266 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 12:45:42.952205  432266 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 12:45:42.952271  432266 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 12:45:42.952405  432266 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 12:45:42.952523  432266 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 12:45:42.952581  432266 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 12:45:42.952683  432266 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 12:45:42.954465  432266 out.go:204]   - Booting up control plane ...
	I0805 12:45:42.954569  432266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 12:45:42.954692  432266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 12:45:42.954783  432266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 12:45:42.954893  432266 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 12:45:42.955114  432266 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 12:45:42.955164  432266 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 12:45:42.955220  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:45:42.955411  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:45:42.955473  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:45:42.955687  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:45:42.955767  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:45:42.955954  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:45:42.956044  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:45:42.956219  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:45:42.956313  432266 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:45:42.956508  432266 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:45:42.956522  432266 kubeadm.go:310] 
	I0805 12:45:42.956587  432266 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 12:45:42.956642  432266 kubeadm.go:310] 		timed out waiting for the condition
	I0805 12:45:42.956651  432266 kubeadm.go:310] 
	I0805 12:45:42.956708  432266 kubeadm.go:310] 	This error is likely caused by:
	I0805 12:45:42.956754  432266 kubeadm.go:310] 		- The kubelet is not running
	I0805 12:45:42.956869  432266 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 12:45:42.956878  432266 kubeadm.go:310] 
	I0805 12:45:42.956985  432266 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 12:45:42.957025  432266 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 12:45:42.957053  432266 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 12:45:42.957077  432266 kubeadm.go:310] 
	I0805 12:45:42.957190  432266 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 12:45:42.957311  432266 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 12:45:42.957322  432266 kubeadm.go:310] 
	I0805 12:45:42.957481  432266 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 12:45:42.957601  432266 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 12:45:42.957710  432266 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 12:45:42.957810  432266 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 12:45:42.957894  432266 kubeadm.go:310] 
	I0805 12:45:42.957897  432266 kubeadm.go:394] duration metric: took 3m56.754338394s to StartCluster
	I0805 12:45:42.957967  432266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 12:45:42.958051  432266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 12:45:43.013664  432266 cri.go:89] found id: ""
	I0805 12:45:43.013693  432266 logs.go:276] 0 containers: []
	W0805 12:45:43.013703  432266 logs.go:278] No container was found matching "kube-apiserver"
	I0805 12:45:43.013711  432266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 12:45:43.013774  432266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 12:45:43.049961  432266 cri.go:89] found id: ""
	I0805 12:45:43.049992  432266 logs.go:276] 0 containers: []
	W0805 12:45:43.050004  432266 logs.go:278] No container was found matching "etcd"
	I0805 12:45:43.050022  432266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 12:45:43.050079  432266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 12:45:43.096944  432266 cri.go:89] found id: ""
	I0805 12:45:43.096975  432266 logs.go:276] 0 containers: []
	W0805 12:45:43.096988  432266 logs.go:278] No container was found matching "coredns"
	I0805 12:45:43.096997  432266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 12:45:43.097065  432266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 12:45:43.133461  432266 cri.go:89] found id: ""
	I0805 12:45:43.133498  432266 logs.go:276] 0 containers: []
	W0805 12:45:43.133510  432266 logs.go:278] No container was found matching "kube-scheduler"
	I0805 12:45:43.133519  432266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 12:45:43.133590  432266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 12:45:43.171375  432266 cri.go:89] found id: ""
	I0805 12:45:43.171405  432266 logs.go:276] 0 containers: []
	W0805 12:45:43.171417  432266 logs.go:278] No container was found matching "kube-proxy"
	I0805 12:45:43.171425  432266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 12:45:43.171493  432266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 12:45:43.209550  432266 cri.go:89] found id: ""
	I0805 12:45:43.209578  432266 logs.go:276] 0 containers: []
	W0805 12:45:43.209586  432266 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 12:45:43.209593  432266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 12:45:43.209719  432266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 12:45:43.249989  432266 cri.go:89] found id: ""
	I0805 12:45:43.250022  432266 logs.go:276] 0 containers: []
	W0805 12:45:43.250035  432266 logs.go:278] No container was found matching "kindnet"
	I0805 12:45:43.250048  432266 logs.go:123] Gathering logs for kubelet ...
	I0805 12:45:43.250072  432266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 12:45:43.302532  432266 logs.go:123] Gathering logs for dmesg ...
	I0805 12:45:43.302571  432266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 12:45:43.318370  432266 logs.go:123] Gathering logs for describe nodes ...
	I0805 12:45:43.318408  432266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 12:45:43.447119  432266 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 12:45:43.447145  432266 logs.go:123] Gathering logs for CRI-O ...
	I0805 12:45:43.447163  432266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 12:45:43.559505  432266 logs.go:123] Gathering logs for container status ...
	I0805 12:45:43.559550  432266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0805 12:45:43.604296  432266 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 12:45:43.604351  432266 out.go:239] * 
	* 
	W0805 12:45:43.604435  432266 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 12:45:43.604467  432266 out.go:239] * 
	* 
	W0805 12:45:43.605736  432266 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 12:45:43.610255  432266 out.go:177] 
	W0805 12:45:43.611932  432266 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 12:45:43.612004  432266 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 12:45:43.612036  432266 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 12:45:43.613765  432266 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
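For reference, the kubelet diagnostics that kubeadm suggests in the failed-start output above can be run directly on the node with 'minikube ssh'; a minimal sketch using only commands already named in the log (illustrative only, not part of the recorded test run):

	# check whether the kubelet is running and why it may have exited
	minikube -p kubernetes-upgrade-515808 ssh "sudo systemctl status kubelet"
	minikube -p kubernetes-upgrade-515808 ssh "sudo journalctl -xeu kubelet"
	# list any control-plane containers that CRI-O did start
	minikube -p kubernetes-upgrade-515808 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# the K8S_KUBELET_NOT_RUNNING suggestion from the log: retry with an explicit cgroup driver
	minikube start -p kubernetes-upgrade-515808 --extra-config=kubelet.cgroup-driver=systemd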
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-515808
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-515808: (2.315323825s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-515808 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-515808 status --format={{.Host}}: exit status 7 (67.369058ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.579189971s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-515808 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.520293ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-515808] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-515808
	    minikube start -p kubernetes-upgrade-515808 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5158082 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-515808 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
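The K8S_DOWNGRADE_UNSUPPORTED output above is the expected refusal; the first recovery path it lists amounts to recreating the profile at the older version, for example (illustrative sketch reusing the flags from this test, not something the test itself runs):

	minikube delete -p kubernetes-upgrade-515808
	minikube start -p kubernetes-upgrade-515808 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

The test instead proceeds by restarting the existing cluster at v1.31.0-rc.0, as shown next.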
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0805 12:46:50.809745  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-515808 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.83819466s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-05 12:48:15.62392933 +0000 UTC m=+4882.581253033
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-515808 -n kubernetes-upgrade-515808
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-515808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-515808 logs -n 25: (3.803257454s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo cat                    | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo cat                    | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo cat                    | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-119870 sudo                        | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC | 05 Aug 24 12:48 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-119870                             | custom-flannel-119870 | jenkins | v1.33.1 | 05 Aug 24 12:48 UTC |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:47:45
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:47:45.164780  441429 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:47:45.164964  441429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:47:45.164976  441429 out.go:304] Setting ErrFile to fd 2...
	I0805 12:47:45.164981  441429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:47:45.165171  441429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:47:45.165810  441429 out.go:298] Setting JSON to false
	I0805 12:47:45.167063  441429 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9012,"bootTime":1722853053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:47:45.167132  441429 start.go:139] virtualization: kvm guest
	I0805 12:47:45.169430  441429 out.go:177] * [flannel-119870] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:47:45.170681  441429 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:47:45.170677  441429 notify.go:220] Checking for updates...
	I0805 12:47:45.173074  441429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:47:45.174307  441429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:47:45.175500  441429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:47:45.176737  441429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:47:45.177884  441429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:47:45.179504  441429 config.go:182] Loaded profile config "custom-flannel-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:47:45.179660  441429 config.go:182] Loaded profile config "enable-default-cni-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:47:45.179802  441429 config.go:182] Loaded profile config "kubernetes-upgrade-515808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:47:45.179922  441429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:47:45.220816  441429 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 12:47:45.221962  441429 start.go:297] selected driver: kvm2
	I0805 12:47:45.221978  441429 start.go:901] validating driver "kvm2" against <nil>
	I0805 12:47:45.221989  441429 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:47:45.222661  441429 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:47:45.222745  441429 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:47:45.239151  441429 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:47:45.239224  441429 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 12:47:45.239550  441429 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:47:45.239598  441429 cni.go:84] Creating CNI manager for "flannel"
	I0805 12:47:45.239606  441429 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0805 12:47:45.239702  441429 start.go:340] cluster config:
	{Name:flannel-119870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:47:45.239866  441429 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:47:45.241668  441429 out.go:177] * Starting "flannel-119870" primary control-plane node in "flannel-119870" cluster
	I0805 12:47:45.242899  441429 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:47:45.242961  441429 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 12:47:45.242971  441429 cache.go:56] Caching tarball of preloaded images
	I0805 12:47:45.243104  441429 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:47:45.243118  441429 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 12:47:45.243249  441429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/config.json ...
	I0805 12:47:45.243279  441429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/config.json: {Name:mk9174df7716e20d18a3563da384e736d5d500bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:47:45.243460  441429 start.go:360] acquireMachinesLock for flannel-119870: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:47:45.243505  441429 start.go:364] duration metric: took 25.12µs to acquireMachinesLock for "flannel-119870"
	I0805 12:47:45.243525  441429 start.go:93] Provisioning new machine with config: &{Name:flannel-119870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:47:45.243617  441429 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 12:47:42.529454  438454 pod_ready.go:102] pod "coredns-7db6d8ff4d-w2bpw" in "kube-system" namespace has status "Ready":"False"
	I0805 12:47:45.028795  438454 pod_ready.go:102] pod "coredns-7db6d8ff4d-w2bpw" in "kube-system" namespace has status "Ready":"False"
	I0805 12:47:46.528142  438454 pod_ready.go:92] pod "coredns-7db6d8ff4d-w2bpw" in "kube-system" namespace has status "Ready":"True"
	I0805 12:47:46.528170  438454 pod_ready.go:81] duration metric: took 12.506298435s for pod "coredns-7db6d8ff4d-w2bpw" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.528181  438454 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.534219  438454 pod_ready.go:92] pod "etcd-custom-flannel-119870" in "kube-system" namespace has status "Ready":"True"
	I0805 12:47:46.534247  438454 pod_ready.go:81] duration metric: took 6.057787ms for pod "etcd-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.534260  438454 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.538741  438454 pod_ready.go:92] pod "kube-apiserver-custom-flannel-119870" in "kube-system" namespace has status "Ready":"True"
	I0805 12:47:46.538765  438454 pod_ready.go:81] duration metric: took 4.496408ms for pod "kube-apiserver-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.538774  438454 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.543987  438454 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-119870" in "kube-system" namespace has status "Ready":"True"
	I0805 12:47:46.544011  438454 pod_ready.go:81] duration metric: took 5.229114ms for pod "kube-controller-manager-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.544023  438454 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-jpg2m" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.548786  438454 pod_ready.go:92] pod "kube-proxy-jpg2m" in "kube-system" namespace has status "Ready":"True"
	I0805 12:47:46.548810  438454 pod_ready.go:81] duration metric: took 4.779044ms for pod "kube-proxy-jpg2m" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.548821  438454 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.926123  438454 pod_ready.go:92] pod "kube-scheduler-custom-flannel-119870" in "kube-system" namespace has status "Ready":"True"
	I0805 12:47:46.926152  438454 pod_ready.go:81] duration metric: took 377.321928ms for pod "kube-scheduler-custom-flannel-119870" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:46.926166  438454 pod_ready.go:38] duration metric: took 12.91430273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:47:46.926182  438454 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:47:46.926236  438454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:47:46.942286  438454 api_server.go:72] duration metric: took 21.893155293s to wait for apiserver process to appear ...
	I0805 12:47:46.942321  438454 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:47:46.942343  438454 api_server.go:253] Checking apiserver healthz at https://192.168.50.196:8443/healthz ...
	I0805 12:47:46.947454  438454 api_server.go:279] https://192.168.50.196:8443/healthz returned 200:
	ok
	I0805 12:47:46.948355  438454 api_server.go:141] control plane version: v1.30.3
	I0805 12:47:46.948383  438454 api_server.go:131] duration metric: took 6.0538ms to wait for apiserver health ...
	I0805 12:47:46.948392  438454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:47:47.131217  438454 system_pods.go:59] 7 kube-system pods found
	I0805 12:47:47.131248  438454 system_pods.go:61] "coredns-7db6d8ff4d-w2bpw" [373a3bd2-e9d2-43a2-ab71-f479c6777cb8] Running
	I0805 12:47:47.131254  438454 system_pods.go:61] "etcd-custom-flannel-119870" [7acc878a-1909-4cf9-9dba-01e697456c67] Running
	I0805 12:47:47.131258  438454 system_pods.go:61] "kube-apiserver-custom-flannel-119870" [8f35041e-52c4-48f7-9a9b-3935885eaf2f] Running
	I0805 12:47:47.131263  438454 system_pods.go:61] "kube-controller-manager-custom-flannel-119870" [8add7af7-c2db-4fb7-a7f9-81b598a6f37a] Running
	I0805 12:47:47.131266  438454 system_pods.go:61] "kube-proxy-jpg2m" [d6ba4963-9765-4e4c-a9ae-449963ec64bd] Running
	I0805 12:47:47.131270  438454 system_pods.go:61] "kube-scheduler-custom-flannel-119870" [083e6d06-7aa5-4047-93f2-372b16f6dee4] Running
	I0805 12:47:47.131273  438454 system_pods.go:61] "storage-provisioner" [4cb32b5d-86ca-4c34-b0f9-7d459a11dfb3] Running
	I0805 12:47:47.131278  438454 system_pods.go:74] duration metric: took 182.879873ms to wait for pod list to return data ...
	I0805 12:47:47.131286  438454 default_sa.go:34] waiting for default service account to be created ...
	I0805 12:47:47.325415  438454 default_sa.go:45] found service account: "default"
	I0805 12:47:47.325443  438454 default_sa.go:55] duration metric: took 194.149622ms for default service account to be created ...
	I0805 12:47:47.325454  438454 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 12:47:47.529228  438454 system_pods.go:86] 7 kube-system pods found
	I0805 12:47:47.529261  438454 system_pods.go:89] "coredns-7db6d8ff4d-w2bpw" [373a3bd2-e9d2-43a2-ab71-f479c6777cb8] Running
	I0805 12:47:47.529269  438454 system_pods.go:89] "etcd-custom-flannel-119870" [7acc878a-1909-4cf9-9dba-01e697456c67] Running
	I0805 12:47:47.529275  438454 system_pods.go:89] "kube-apiserver-custom-flannel-119870" [8f35041e-52c4-48f7-9a9b-3935885eaf2f] Running
	I0805 12:47:47.529279  438454 system_pods.go:89] "kube-controller-manager-custom-flannel-119870" [8add7af7-c2db-4fb7-a7f9-81b598a6f37a] Running
	I0805 12:47:47.529283  438454 system_pods.go:89] "kube-proxy-jpg2m" [d6ba4963-9765-4e4c-a9ae-449963ec64bd] Running
	I0805 12:47:47.529289  438454 system_pods.go:89] "kube-scheduler-custom-flannel-119870" [083e6d06-7aa5-4047-93f2-372b16f6dee4] Running
	I0805 12:47:47.529295  438454 system_pods.go:89] "storage-provisioner" [4cb32b5d-86ca-4c34-b0f9-7d459a11dfb3] Running
	I0805 12:47:47.529305  438454 system_pods.go:126] duration metric: took 203.842547ms to wait for k8s-apps to be running ...
	I0805 12:47:47.529317  438454 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 12:47:47.529375  438454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:47:47.546403  438454 system_svc.go:56] duration metric: took 17.075825ms WaitForService to wait for kubelet
	I0805 12:47:47.546442  438454 kubeadm.go:582] duration metric: took 22.497318832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:47:47.546469  438454 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:47:47.726918  438454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:47:47.726949  438454 node_conditions.go:123] node cpu capacity is 2
	I0805 12:47:47.726962  438454 node_conditions.go:105] duration metric: took 180.486857ms to run NodePressure ...
	I0805 12:47:47.726974  438454 start.go:241] waiting for startup goroutines ...
	I0805 12:47:47.726983  438454 start.go:246] waiting for cluster config update ...
	I0805 12:47:47.726997  438454 start.go:255] writing updated cluster config ...
	I0805 12:47:47.727284  438454 ssh_runner.go:195] Run: rm -f paused
	I0805 12:47:47.780293  438454 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 12:47:47.781978  438454 out.go:177] * Done! kubectl is now configured to use "custom-flannel-119870" cluster and "default" namespace by default
	I0805 12:47:43.064510  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:43.563791  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:44.063914  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:44.564546  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:45.064655  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:45.563909  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:46.064484  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:46.563805  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:47.064486  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:47.564237  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:45.245242  441429 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 12:47:45.245427  441429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:47:45.245485  441429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:47:45.261114  441429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
	I0805 12:47:45.261606  441429 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:47:45.262161  441429 main.go:141] libmachine: Using API Version  1
	I0805 12:47:45.262186  441429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:47:45.262517  441429 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:47:45.262723  441429 main.go:141] libmachine: (flannel-119870) Calling .GetMachineName
	I0805 12:47:45.262881  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:47:45.263013  441429 start.go:159] libmachine.API.Create for "flannel-119870" (driver="kvm2")
	I0805 12:47:45.263044  441429 client.go:168] LocalClient.Create starting
	I0805 12:47:45.263089  441429 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 12:47:45.263127  441429 main.go:141] libmachine: Decoding PEM data...
	I0805 12:47:45.263156  441429 main.go:141] libmachine: Parsing certificate...
	I0805 12:47:45.263235  441429 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 12:47:45.263265  441429 main.go:141] libmachine: Decoding PEM data...
	I0805 12:47:45.263288  441429 main.go:141] libmachine: Parsing certificate...
	I0805 12:47:45.263317  441429 main.go:141] libmachine: Running pre-create checks...
	I0805 12:47:45.263330  441429 main.go:141] libmachine: (flannel-119870) Calling .PreCreateCheck
	I0805 12:47:45.263759  441429 main.go:141] libmachine: (flannel-119870) Calling .GetConfigRaw
	I0805 12:47:45.264130  441429 main.go:141] libmachine: Creating machine...
	I0805 12:47:45.264145  441429 main.go:141] libmachine: (flannel-119870) Calling .Create
	I0805 12:47:45.264278  441429 main.go:141] libmachine: (flannel-119870) Creating KVM machine...
	I0805 12:47:45.265545  441429 main.go:141] libmachine: (flannel-119870) DBG | found existing default KVM network
	I0805 12:47:45.267416  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:45.267219  441451 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015970}
	I0805 12:47:45.267473  441429 main.go:141] libmachine: (flannel-119870) DBG | created network xml: 
	I0805 12:47:45.267501  441429 main.go:141] libmachine: (flannel-119870) DBG | <network>
	I0805 12:47:45.267520  441429 main.go:141] libmachine: (flannel-119870) DBG |   <name>mk-flannel-119870</name>
	I0805 12:47:45.267539  441429 main.go:141] libmachine: (flannel-119870) DBG |   <dns enable='no'/>
	I0805 12:47:45.267556  441429 main.go:141] libmachine: (flannel-119870) DBG |   
	I0805 12:47:45.267580  441429 main.go:141] libmachine: (flannel-119870) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 12:47:45.267599  441429 main.go:141] libmachine: (flannel-119870) DBG |     <dhcp>
	I0805 12:47:45.267616  441429 main.go:141] libmachine: (flannel-119870) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 12:47:45.267646  441429 main.go:141] libmachine: (flannel-119870) DBG |     </dhcp>
	I0805 12:47:45.267664  441429 main.go:141] libmachine: (flannel-119870) DBG |   </ip>
	I0805 12:47:45.267671  441429 main.go:141] libmachine: (flannel-119870) DBG |   
	I0805 12:47:45.267682  441429 main.go:141] libmachine: (flannel-119870) DBG | </network>
	I0805 12:47:45.267692  441429 main.go:141] libmachine: (flannel-119870) DBG | 
	I0805 12:47:45.273495  441429 main.go:141] libmachine: (flannel-119870) DBG | trying to create private KVM network mk-flannel-119870 192.168.39.0/24...
	I0805 12:47:45.355817  441429 main.go:141] libmachine: (flannel-119870) DBG | private KVM network mk-flannel-119870 192.168.39.0/24 created
	I0805 12:47:45.355855  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:45.355794  441451 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:47:45.355869  441429 main.go:141] libmachine: (flannel-119870) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870 ...
	I0805 12:47:45.355902  441429 main.go:141] libmachine: (flannel-119870) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 12:47:45.355922  441429 main.go:141] libmachine: (flannel-119870) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 12:47:45.641128  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:45.640983  441451 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/id_rsa...
	I0805 12:47:45.762466  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:45.762310  441451 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/flannel-119870.rawdisk...
	I0805 12:47:45.762498  441429 main.go:141] libmachine: (flannel-119870) DBG | Writing magic tar header
	I0805 12:47:45.762517  441429 main.go:141] libmachine: (flannel-119870) DBG | Writing SSH key tar header
	I0805 12:47:45.762530  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:45.762473  441451 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870 ...
	I0805 12:47:45.762611  441429 main.go:141] libmachine: (flannel-119870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870
	I0805 12:47:45.762639  441429 main.go:141] libmachine: (flannel-119870) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870 (perms=drwx------)
	I0805 12:47:45.762653  441429 main.go:141] libmachine: (flannel-119870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 12:47:45.762668  441429 main.go:141] libmachine: (flannel-119870) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 12:47:45.762685  441429 main.go:141] libmachine: (flannel-119870) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 12:47:45.762699  441429 main.go:141] libmachine: (flannel-119870) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 12:47:45.762711  441429 main.go:141] libmachine: (flannel-119870) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 12:47:45.762725  441429 main.go:141] libmachine: (flannel-119870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:47:45.762739  441429 main.go:141] libmachine: (flannel-119870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 12:47:45.762751  441429 main.go:141] libmachine: (flannel-119870) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 12:47:45.762766  441429 main.go:141] libmachine: (flannel-119870) Creating domain...
	I0805 12:47:45.762780  441429 main.go:141] libmachine: (flannel-119870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 12:47:45.762792  441429 main.go:141] libmachine: (flannel-119870) DBG | Checking permissions on dir: /home/jenkins
	I0805 12:47:45.762810  441429 main.go:141] libmachine: (flannel-119870) DBG | Checking permissions on dir: /home
	I0805 12:47:45.762817  441429 main.go:141] libmachine: (flannel-119870) DBG | Skipping /home - not owner
	I0805 12:47:45.764102  441429 main.go:141] libmachine: (flannel-119870) define libvirt domain using xml: 
	I0805 12:47:45.764155  441429 main.go:141] libmachine: (flannel-119870) <domain type='kvm'>
	I0805 12:47:45.764175  441429 main.go:141] libmachine: (flannel-119870)   <name>flannel-119870</name>
	I0805 12:47:45.764188  441429 main.go:141] libmachine: (flannel-119870)   <memory unit='MiB'>3072</memory>
	I0805 12:47:45.764197  441429 main.go:141] libmachine: (flannel-119870)   <vcpu>2</vcpu>
	I0805 12:47:45.764208  441429 main.go:141] libmachine: (flannel-119870)   <features>
	I0805 12:47:45.764219  441429 main.go:141] libmachine: (flannel-119870)     <acpi/>
	I0805 12:47:45.764226  441429 main.go:141] libmachine: (flannel-119870)     <apic/>
	I0805 12:47:45.764236  441429 main.go:141] libmachine: (flannel-119870)     <pae/>
	I0805 12:47:45.764247  441429 main.go:141] libmachine: (flannel-119870)     
	I0805 12:47:45.764258  441429 main.go:141] libmachine: (flannel-119870)   </features>
	I0805 12:47:45.764268  441429 main.go:141] libmachine: (flannel-119870)   <cpu mode='host-passthrough'>
	I0805 12:47:45.764279  441429 main.go:141] libmachine: (flannel-119870)   
	I0805 12:47:45.764287  441429 main.go:141] libmachine: (flannel-119870)   </cpu>
	I0805 12:47:45.764298  441429 main.go:141] libmachine: (flannel-119870)   <os>
	I0805 12:47:45.764305  441429 main.go:141] libmachine: (flannel-119870)     <type>hvm</type>
	I0805 12:47:45.764316  441429 main.go:141] libmachine: (flannel-119870)     <boot dev='cdrom'/>
	I0805 12:47:45.764323  441429 main.go:141] libmachine: (flannel-119870)     <boot dev='hd'/>
	I0805 12:47:45.764335  441429 main.go:141] libmachine: (flannel-119870)     <bootmenu enable='no'/>
	I0805 12:47:45.764345  441429 main.go:141] libmachine: (flannel-119870)   </os>
	I0805 12:47:45.764355  441429 main.go:141] libmachine: (flannel-119870)   <devices>
	I0805 12:47:45.764368  441429 main.go:141] libmachine: (flannel-119870)     <disk type='file' device='cdrom'>
	I0805 12:47:45.764405  441429 main.go:141] libmachine: (flannel-119870)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/boot2docker.iso'/>
	I0805 12:47:45.764427  441429 main.go:141] libmachine: (flannel-119870)       <target dev='hdc' bus='scsi'/>
	I0805 12:47:45.764440  441429 main.go:141] libmachine: (flannel-119870)       <readonly/>
	I0805 12:47:45.764452  441429 main.go:141] libmachine: (flannel-119870)     </disk>
	I0805 12:47:45.764487  441429 main.go:141] libmachine: (flannel-119870)     <disk type='file' device='disk'>
	I0805 12:47:45.764513  441429 main.go:141] libmachine: (flannel-119870)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 12:47:45.764533  441429 main.go:141] libmachine: (flannel-119870)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/flannel-119870.rawdisk'/>
	I0805 12:47:45.764545  441429 main.go:141] libmachine: (flannel-119870)       <target dev='hda' bus='virtio'/>
	I0805 12:47:45.764558  441429 main.go:141] libmachine: (flannel-119870)     </disk>
	I0805 12:47:45.764584  441429 main.go:141] libmachine: (flannel-119870)     <interface type='network'>
	I0805 12:47:45.764599  441429 main.go:141] libmachine: (flannel-119870)       <source network='mk-flannel-119870'/>
	I0805 12:47:45.764614  441429 main.go:141] libmachine: (flannel-119870)       <model type='virtio'/>
	I0805 12:47:45.764650  441429 main.go:141] libmachine: (flannel-119870)     </interface>
	I0805 12:47:45.764674  441429 main.go:141] libmachine: (flannel-119870)     <interface type='network'>
	I0805 12:47:45.764688  441429 main.go:141] libmachine: (flannel-119870)       <source network='default'/>
	I0805 12:47:45.764698  441429 main.go:141] libmachine: (flannel-119870)       <model type='virtio'/>
	I0805 12:47:45.764720  441429 main.go:141] libmachine: (flannel-119870)     </interface>
	I0805 12:47:45.764728  441429 main.go:141] libmachine: (flannel-119870)     <serial type='pty'>
	I0805 12:47:45.764759  441429 main.go:141] libmachine: (flannel-119870)       <target port='0'/>
	I0805 12:47:45.764782  441429 main.go:141] libmachine: (flannel-119870)     </serial>
	I0805 12:47:45.764794  441429 main.go:141] libmachine: (flannel-119870)     <console type='pty'>
	I0805 12:47:45.764806  441429 main.go:141] libmachine: (flannel-119870)       <target type='serial' port='0'/>
	I0805 12:47:45.764818  441429 main.go:141] libmachine: (flannel-119870)     </console>
	I0805 12:47:45.764828  441429 main.go:141] libmachine: (flannel-119870)     <rng model='virtio'>
	I0805 12:47:45.764840  441429 main.go:141] libmachine: (flannel-119870)       <backend model='random'>/dev/random</backend>
	I0805 12:47:45.764853  441429 main.go:141] libmachine: (flannel-119870)     </rng>
	I0805 12:47:45.764864  441429 main.go:141] libmachine: (flannel-119870)     
	I0805 12:47:45.764874  441429 main.go:141] libmachine: (flannel-119870)     
	I0805 12:47:45.764885  441429 main.go:141] libmachine: (flannel-119870)   </devices>
	I0805 12:47:45.764896  441429 main.go:141] libmachine: (flannel-119870) </domain>
	I0805 12:47:45.764909  441429 main.go:141] libmachine: (flannel-119870) 
	I0805 12:47:45.768934  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:49:9a:49 in network default
	I0805 12:47:45.769541  441429 main.go:141] libmachine: (flannel-119870) Ensuring networks are active...
	I0805 12:47:45.769565  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:45.770216  441429 main.go:141] libmachine: (flannel-119870) Ensuring network default is active
	I0805 12:47:45.770571  441429 main.go:141] libmachine: (flannel-119870) Ensuring network mk-flannel-119870 is active
	I0805 12:47:45.771126  441429 main.go:141] libmachine: (flannel-119870) Getting domain xml...
	I0805 12:47:45.772062  441429 main.go:141] libmachine: (flannel-119870) Creating domain...
	I0805 12:47:47.068238  441429 main.go:141] libmachine: (flannel-119870) Waiting to get IP...
	I0805 12:47:47.070344  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:47.070850  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:47.070874  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:47.070810  441451 retry.go:31] will retry after 214.299046ms: waiting for machine to come up
	I0805 12:47:47.287298  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:47.287840  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:47.287872  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:47.287802  441451 retry.go:31] will retry after 314.817627ms: waiting for machine to come up
	I0805 12:47:47.604398  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:47.605034  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:47.605055  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:47.604984  441451 retry.go:31] will retry after 327.203026ms: waiting for machine to come up
	I0805 12:47:47.933187  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:47.933729  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:47.933757  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:47.933694  441451 retry.go:31] will retry after 453.81601ms: waiting for machine to come up
	I0805 12:47:48.389256  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:48.389787  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:48.389816  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:48.389740  441451 retry.go:31] will retry after 525.487564ms: waiting for machine to come up
	I0805 12:47:48.917067  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:48.917575  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:48.917613  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:48.917528  441451 retry.go:31] will retry after 721.146211ms: waiting for machine to come up
	I0805 12:47:49.640498  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:49.642846  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:49.642879  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:49.642781  441451 retry.go:31] will retry after 965.211903ms: waiting for machine to come up
	I0805 12:47:48.064266  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:48.564388  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:49.064654  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:49.564747  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:50.064351  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:50.563946  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:51.063681  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:51.563893  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:52.064235  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:52.564201  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:53.063631  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:53.563901  439101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:47:53.664405  439101 kubeadm.go:1113] duration metric: took 12.743922638s to wait for elevateKubeSystemPrivileges
	I0805 12:47:53.664442  439101 kubeadm.go:394] duration metric: took 24.066138203s to StartCluster
	I0805 12:47:53.664460  439101 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:47:53.664547  439101 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:47:53.666295  439101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:47:53.666575  439101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 12:47:53.666581  439101 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:47:53.666675  439101 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:47:53.666755  439101 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-119870"
	I0805 12:47:53.666794  439101 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-119870"
	I0805 12:47:53.666828  439101 host.go:66] Checking if "enable-default-cni-119870" exists ...
	I0805 12:47:53.666868  439101 config.go:182] Loaded profile config "enable-default-cni-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:47:53.667520  439101 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-119870"
	I0805 12:47:53.667579  439101 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-119870"
	I0805 12:47:53.668275  439101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:47:53.668329  439101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:47:53.669129  439101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:47:53.669438  439101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:47:53.669945  439101 out.go:177] * Verifying Kubernetes components...
	I0805 12:47:53.671268  439101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:47:53.685733  439101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0805 12:47:53.686286  439101 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:47:53.686892  439101 main.go:141] libmachine: Using API Version  1
	I0805 12:47:53.686920  439101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:47:53.687274  439101 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:47:53.687508  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetState
	I0805 12:47:53.688911  439101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0805 12:47:53.689375  439101 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:47:53.689913  439101 main.go:141] libmachine: Using API Version  1
	I0805 12:47:53.689935  439101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:47:53.690277  439101 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:47:53.690956  439101 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-119870"
	I0805 12:47:53.690992  439101 host.go:66] Checking if "enable-default-cni-119870" exists ...
	I0805 12:47:53.691116  439101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:47:53.691162  439101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:47:53.691245  439101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:47:53.691278  439101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:47:53.706912  439101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44605
	I0805 12:47:53.707331  439101 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:47:53.707800  439101 main.go:141] libmachine: Using API Version  1
	I0805 12:47:53.707821  439101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:47:53.708178  439101 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:47:53.708855  439101 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:47:53.708918  439101 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:47:53.709884  439101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0805 12:47:53.710377  439101 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:47:53.710900  439101 main.go:141] libmachine: Using API Version  1
	I0805 12:47:53.711250  439101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:47:53.711693  439101 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:47:53.711904  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetState
	I0805 12:47:53.713887  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .DriverName
	I0805 12:47:53.716262  439101 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:47:53.717607  439101 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:47:53.717627  439101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:47:53.717647  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHHostname
	I0805 12:47:53.722088  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | domain enable-default-cni-119870 has defined MAC address 52:54:00:16:6f:6d in network mk-enable-default-cni-119870
	I0805 12:47:53.722121  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6f:6d", ip: ""} in network mk-enable-default-cni-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:47:10 +0000 UTC Type:0 Mac:52:54:00:16:6f:6d Iaid: IPaddr:192.168.72.68 Prefix:24 Hostname:enable-default-cni-119870 Clientid:01:52:54:00:16:6f:6d}
	I0805 12:47:53.722140  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | domain enable-default-cni-119870 has defined IP address 192.168.72.68 and MAC address 52:54:00:16:6f:6d in network mk-enable-default-cni-119870
	I0805 12:47:53.722186  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHPort
	I0805 12:47:53.722371  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHKeyPath
	I0805 12:47:53.722555  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHUsername
	I0805 12:47:53.722720  439101 sshutil.go:53] new ssh client: &{IP:192.168.72.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/enable-default-cni-119870/id_rsa Username:docker}
	I0805 12:47:53.725672  439101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I0805 12:47:53.726114  439101 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:47:53.726648  439101 main.go:141] libmachine: Using API Version  1
	I0805 12:47:53.726667  439101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:47:53.727071  439101 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:47:53.727326  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetState
	I0805 12:47:53.729275  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .DriverName
	I0805 12:47:53.729509  439101 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:47:53.729526  439101 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:47:53.729546  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHHostname
	I0805 12:47:53.733296  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | domain enable-default-cni-119870 has defined MAC address 52:54:00:16:6f:6d in network mk-enable-default-cni-119870
	I0805 12:47:53.733497  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6f:6d", ip: ""} in network mk-enable-default-cni-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:47:10 +0000 UTC Type:0 Mac:52:54:00:16:6f:6d Iaid: IPaddr:192.168.72.68 Prefix:24 Hostname:enable-default-cni-119870 Clientid:01:52:54:00:16:6f:6d}
	I0805 12:47:53.733523  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | domain enable-default-cni-119870 has defined IP address 192.168.72.68 and MAC address 52:54:00:16:6f:6d in network mk-enable-default-cni-119870
	I0805 12:47:53.733713  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHPort
	I0805 12:47:53.733900  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHKeyPath
	I0805 12:47:53.734061  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .GetSSHUsername
	I0805 12:47:53.734189  439101 sshutil.go:53] new ssh client: &{IP:192.168.72.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/enable-default-cni-119870/id_rsa Username:docker}
	I0805 12:47:53.842546  439101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 12:47:53.875274  439101 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:47:54.052955  439101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:47:54.137696  439101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:47:54.398733  439101 main.go:141] libmachine: Making call to close driver server
	I0805 12:47:54.398773  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .Close
	I0805 12:47:54.399216  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | Closing plugin on server side
	I0805 12:47:54.398618  439101 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0805 12:47:54.399304  439101 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:47:54.399330  439101 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:47:54.399341  439101 main.go:141] libmachine: Making call to close driver server
	I0805 12:47:54.399350  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .Close
	I0805 12:47:54.399761  439101 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:47:54.399777  439101 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:47:54.400209  439101 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-119870" to be "Ready" ...
	I0805 12:47:54.416412  439101 node_ready.go:49] node "enable-default-cni-119870" has status "Ready":"True"
	I0805 12:47:54.416444  439101 node_ready.go:38] duration metric: took 16.196222ms for node "enable-default-cni-119870" to be "Ready" ...
	I0805 12:47:54.416456  439101 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:47:54.425421  439101 main.go:141] libmachine: Making call to close driver server
	I0805 12:47:54.425448  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .Close
	I0805 12:47:54.425772  439101 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:47:54.425797  439101 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:47:54.431177  439101 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace to be "Ready" ...
	I0805 12:47:54.909289  439101 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-119870" context rescaled to 1 replicas
	I0805 12:47:55.093722  439101 main.go:141] libmachine: Making call to close driver server
	I0805 12:47:55.093781  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .Close
	I0805 12:47:55.095915  439101 main.go:141] libmachine: (enable-default-cni-119870) DBG | Closing plugin on server side
	I0805 12:47:55.095929  439101 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:47:55.096013  439101 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:47:55.096033  439101 main.go:141] libmachine: Making call to close driver server
	I0805 12:47:55.096042  439101 main.go:141] libmachine: (enable-default-cni-119870) Calling .Close
	I0805 12:47:55.096385  439101 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:47:55.096404  439101 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:47:55.099263  439101 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 12:47:50.609453  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:50.610155  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:50.610188  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:50.610123  441451 retry.go:31] will retry after 1.21476036s: waiting for machine to come up
	I0805 12:47:51.826452  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:51.827122  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:51.827155  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:51.827092  441451 retry.go:31] will retry after 1.134765762s: waiting for machine to come up
	I0805 12:47:52.963197  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:52.963782  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:52.963813  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:52.963707  441451 retry.go:31] will retry after 1.620475744s: waiting for machine to come up
	I0805 12:47:54.585311  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:54.585811  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:54.585843  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:54.585751  441451 retry.go:31] will retry after 2.642870577s: waiting for machine to come up
	I0805 12:47:55.100715  439101 addons.go:510] duration metric: took 1.434045061s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 12:47:56.439577  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:47:57.229950  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:47:57.230557  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:47:57.230586  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:47:57.230484  441451 retry.go:31] will retry after 3.281915621s: waiting for machine to come up
	I0805 12:47:58.937681  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:48:00.938089  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:48:00.514222  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:00.514670  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:48:00.514700  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:48:00.514637  441451 retry.go:31] will retry after 4.22525912s: waiting for machine to come up
	I0805 12:48:04.745005  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:04.745452  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find current IP address of domain flannel-119870 in network mk-flannel-119870
	I0805 12:48:04.745480  441429 main.go:141] libmachine: (flannel-119870) DBG | I0805 12:48:04.745411  441451 retry.go:31] will retry after 3.713416553s: waiting for machine to come up
	I0805 12:48:03.438491  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:48:05.439190  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:48:08.460642  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.461094  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has current primary IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.461118  441429 main.go:141] libmachine: (flannel-119870) Found IP for machine: 192.168.39.69
	I0805 12:48:08.461131  441429 main.go:141] libmachine: (flannel-119870) Reserving static IP address...
	I0805 12:48:08.461521  441429 main.go:141] libmachine: (flannel-119870) DBG | unable to find host DHCP lease matching {name: "flannel-119870", mac: "52:54:00:5a:41:d8", ip: "192.168.39.69"} in network mk-flannel-119870
	I0805 12:48:08.539396  441429 main.go:141] libmachine: (flannel-119870) DBG | Getting to WaitForSSH function...
	I0805 12:48:08.539426  441429 main.go:141] libmachine: (flannel-119870) Reserved static IP address: 192.168.39.69
	I0805 12:48:08.539439  441429 main.go:141] libmachine: (flannel-119870) Waiting for SSH to be available...
	I0805 12:48:08.542565  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.543101  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:08.543132  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.543279  441429 main.go:141] libmachine: (flannel-119870) DBG | Using SSH client type: external
	I0805 12:48:08.543298  441429 main.go:141] libmachine: (flannel-119870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/id_rsa (-rw-------)
	I0805 12:48:08.543340  441429 main.go:141] libmachine: (flannel-119870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:48:08.543359  441429 main.go:141] libmachine: (flannel-119870) DBG | About to run SSH command:
	I0805 12:48:08.543371  441429 main.go:141] libmachine: (flannel-119870) DBG | exit 0
	I0805 12:48:08.676632  441429 main.go:141] libmachine: (flannel-119870) DBG | SSH cmd err, output: <nil>: 
	I0805 12:48:08.677122  441429 main.go:141] libmachine: (flannel-119870) KVM machine creation complete!
	I0805 12:48:08.677502  441429 main.go:141] libmachine: (flannel-119870) Calling .GetConfigRaw
	I0805 12:48:08.680341  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:48:08.683080  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:48:08.683252  441429 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 12:48:08.683279  441429 main.go:141] libmachine: (flannel-119870) Calling .GetState
	I0805 12:48:08.684944  441429 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 12:48:08.684962  441429 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 12:48:08.684969  441429 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 12:48:08.684978  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:08.688832  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.689400  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:08.689424  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.689453  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:08.689657  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:08.689804  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:08.689934  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:08.690072  441429 main.go:141] libmachine: Using SSH client type: native
	I0805 12:48:08.690320  441429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0805 12:48:08.690331  441429 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 12:48:08.799730  441429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:48:08.799772  441429 main.go:141] libmachine: Detecting the provisioner...
	I0805 12:48:08.799784  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:08.803168  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.803586  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:08.803622  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.803809  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:08.804031  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:08.804272  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:08.804469  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:08.804671  441429 main.go:141] libmachine: Using SSH client type: native
	I0805 12:48:08.804859  441429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0805 12:48:08.804870  441429 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 12:48:08.922168  441429 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 12:48:08.922264  441429 main.go:141] libmachine: found compatible host: buildroot
	I0805 12:48:08.922278  441429 main.go:141] libmachine: Provisioning with buildroot...
	I0805 12:48:08.922291  441429 main.go:141] libmachine: (flannel-119870) Calling .GetMachineName
	I0805 12:48:08.922574  441429 buildroot.go:166] provisioning hostname "flannel-119870"
	I0805 12:48:08.922598  441429 main.go:141] libmachine: (flannel-119870) Calling .GetMachineName
	I0805 12:48:08.922785  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:08.925858  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.926277  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:08.926305  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:08.926552  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:08.926745  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:08.926916  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:08.927065  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:08.927236  441429 main.go:141] libmachine: Using SSH client type: native
	I0805 12:48:08.927494  441429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0805 12:48:08.927515  441429 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-119870 && echo "flannel-119870" | sudo tee /etc/hostname
	I0805 12:48:09.056100  441429 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-119870
	
	I0805 12:48:09.056140  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:09.059319  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.059706  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:09.059729  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.060027  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:09.060269  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:09.060460  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:09.060639  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:09.060836  441429 main.go:141] libmachine: Using SSH client type: native
	I0805 12:48:09.061058  441429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0805 12:48:09.061082  441429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-119870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-119870/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-119870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:48:09.182604  441429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:48:09.182634  441429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:48:09.182678  441429 buildroot.go:174] setting up certificates
	I0805 12:48:09.182691  441429 provision.go:84] configureAuth start
	I0805 12:48:09.182717  441429 main.go:141] libmachine: (flannel-119870) Calling .GetMachineName
	I0805 12:48:09.183072  441429 main.go:141] libmachine: (flannel-119870) Calling .GetIP
	I0805 12:48:09.185904  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.186295  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:09.186342  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.186529  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:09.188866  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.189107  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:09.189138  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.189370  441429 provision.go:143] copyHostCerts
	I0805 12:48:09.189480  441429 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:48:09.189514  441429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:48:09.189618  441429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:48:09.189813  441429 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:48:09.189848  441429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:48:09.189900  441429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:48:09.190018  441429 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:48:09.190046  441429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:48:09.190087  441429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:48:09.190194  441429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.flannel-119870 san=[127.0.0.1 192.168.39.69 flannel-119870 localhost minikube]
	I0805 12:48:09.401007  441429 provision.go:177] copyRemoteCerts
	I0805 12:48:09.401077  441429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:48:09.401135  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:09.406277  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.406711  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:09.406745  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.406921  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:09.407104  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:09.407304  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:09.407545  441429 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/id_rsa Username:docker}
	I0805 12:48:09.497165  441429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:48:09.530032  441429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0805 12:48:09.558060  441429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:48:09.589100  441429 provision.go:87] duration metric: took 406.394408ms to configureAuth
	I0805 12:48:09.589140  441429 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:48:09.589360  441429 config.go:182] Loaded profile config "flannel-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:48:09.589443  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:09.592594  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.592986  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:09.593016  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.593298  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:09.593473  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:09.593642  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:09.593778  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:09.593963  441429 main.go:141] libmachine: Using SSH client type: native
	I0805 12:48:09.594150  441429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0805 12:48:09.594173  441429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:48:09.927423  441429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:48:09.927461  441429 main.go:141] libmachine: Checking connection to Docker...
	I0805 12:48:09.927473  441429 main.go:141] libmachine: (flannel-119870) Calling .GetURL
	I0805 12:48:09.928951  441429 main.go:141] libmachine: (flannel-119870) DBG | Using libvirt version 6000000
	I0805 12:48:09.932941  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.933507  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:09.933878  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.934099  441429 main.go:141] libmachine: Docker is up and running!
	I0805 12:48:09.934117  441429 main.go:141] libmachine: Reticulating splines...
	I0805 12:48:09.934126  441429 client.go:171] duration metric: took 24.671070393s to LocalClient.Create
	I0805 12:48:09.934144  441429 start.go:167] duration metric: took 24.671131248s to libmachine.API.Create "flannel-119870"
	I0805 12:48:09.934153  441429 start.go:293] postStartSetup for "flannel-119870" (driver="kvm2")
	I0805 12:48:09.934163  441429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:48:09.934177  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:48:09.934378  441429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:48:09.934410  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:09.940727  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.941326  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:09.941353  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:09.941544  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:09.943614  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:09.943822  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:09.943987  441429 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/id_rsa Username:docker}
	I0805 12:48:10.037117  441429 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:48:10.042581  441429 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:48:10.042607  441429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:48:10.042667  441429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:48:10.042778  441429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:48:10.042906  441429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:48:10.056520  441429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:48:10.087352  441429 start.go:296] duration metric: took 153.185725ms for postStartSetup
	I0805 12:48:10.087408  441429 main.go:141] libmachine: (flannel-119870) Calling .GetConfigRaw
	I0805 12:48:10.088306  441429 main.go:141] libmachine: (flannel-119870) Calling .GetIP
	I0805 12:48:10.091453  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.091940  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:10.091973  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.092244  441429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/config.json ...
	I0805 12:48:10.092510  441429 start.go:128] duration metric: took 24.848880409s to createHost
	I0805 12:48:10.092536  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:10.095281  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.095711  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:10.095736  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.096001  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:10.096202  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:10.096386  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:10.096498  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:10.096636  441429 main.go:141] libmachine: Using SSH client type: native
	I0805 12:48:10.096839  441429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0805 12:48:10.097164  441429 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:48:06.522760  439495 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431 a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37 a97291ccbcdcbf5b02208e879737961e11d7aecc6dcb103cdd892cab35a50f18 8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b 6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7 fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e 32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576 9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58 551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7 dd0c1c82dd53c2862daa98ee2b0ee03486df3c66afd626860f87f912f2191533 524fcbd99259abee9655029cbad61e39c93d9b7f8c9925450bfbfb142b3b0c9b e8f1e1bb2d415cf272b7c9a841d6a3d26e005699239ac7a5ec2065af3a9d5420 6ca2cccb0be4d0187bad362facde848442ba0d8c15bd96f5f8f0677e730a1b13 a1cfde7457f288c6441396f3a0deab0b08244e4d39c1b050767c7caf3e335fac bd5542c19e49881bea760e54f771c0601631d3e96768ff964860d017594ee130 c474681074aedae6c2877d6323a904ee355f828fd63a58450869d4811e9cc603: (25.747637454s)
	W0805 12:48:06.522861  439495 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431 a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37 a97291ccbcdcbf5b02208e879737961e11d7aecc6dcb103cdd892cab35a50f18 8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b 6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7 fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e 32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576 9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58 551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7 dd0c1c82dd53c2862daa98ee2b0ee03486df3c66afd626860f87f912f2191533 524fcbd99259abee9655029cbad61e39c93d9b7f8c9925450bfbfb142b3b0c9b e8f1e1bb2d415cf272b7c9a841d6a3d26e005699239ac7a5ec2065af3a9d5420 6ca2cccb0be4d0187bad362facde848442ba0d8c15bd96f5f8f0677e730a1b13 a1cfde7457f288c6441396f3a0deab0b08244e4d39c1b050767c7caf3e335fac bd5542c19e49881bea760e54f771c0601631d3e96768ff964860d017594ee130 c474681074aedae6c2877d6323a904ee355f828fd63a58450869d4811e9cc603: Process exited with status 1
	stdout:
	51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431
	a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37
	a97291ccbcdcbf5b02208e879737961e11d7aecc6dcb103cdd892cab35a50f18
	8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b
	6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7
	fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e
	32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576
	9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58
	
	stderr:
	E0805 12:48:06.514241    3311 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7\": container with ID starting with 551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7 not found: ID does not exist" containerID="551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7"
	time="2024-08-05T12:48:06Z" level=fatal msg="stopping the container \"551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7\": rpc error: code = NotFound desc = could not find container \"551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7\": container with ID starting with 551d7159d1293013640ec118e54216b3f6e03ab53aa2f6657cee96805fa348e7 not found: ID does not exist"
	I0805 12:48:06.522947  439495 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:48:06.578721  439495 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:48:06.591914  439495 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Aug  5 12:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Aug  5 12:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Aug  5 12:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug  5 12:46 /etc/kubernetes/scheduler.conf
	
	I0805 12:48:06.591983  439495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:48:06.601933  439495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:48:06.613004  439495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:48:06.624091  439495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:48:06.624157  439495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:48:06.634726  439495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:48:06.645938  439495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:48:06.646000  439495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
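The kubeadm.go lines above grep each pre-existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete the files that no longer reference it, so the following kubeadm init phases can regenerate them. A minimal Go sketch of that check (paths and endpoint taken from the log; the helper name and error handling are illustrative assumptions, and it would need root to touch these files):

package main

import (
	"fmt"
	"os"
	"strings"
)

// keepOnlyMatchingKubeconfigs removes any kubeconfig that does not mention the
// expected control-plane endpoint, mirroring the grep/rm pattern in the log.
func keepOnlyMatchingKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return fmt.Errorf("reading %s: %w", p, err)
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	err := keepOnlyMatchingKubeconfigs(
		"https://control-plane.minikube.internal:8443",
		[]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		},
	)
	if err != nil {
		fmt.Println(err)
	}
}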
	I0805 12:48:06.658538  439495 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:48:06.669343  439495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:48:06.728851  439495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:48:07.701687  439495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:48:07.967791  439495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:48:08.046521  439495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:48:08.131167  439495 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:48:08.131247  439495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:48:08.631960  439495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:48:09.132182  439495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:48:09.152532  439495 api_server.go:72] duration metric: took 1.021363936s to wait for apiserver process to appear ...
	I0805 12:48:09.152561  439495 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:48:09.152584  439495 api_server.go:253] Checking apiserver healthz at https://192.168.61.242:8443/healthz ...
	I0805 12:48:10.217686  441429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862090.190383323
	
	I0805 12:48:10.217708  441429 fix.go:216] guest clock: 1722862090.190383323
	I0805 12:48:10.217715  441429 fix.go:229] Guest: 2024-08-05 12:48:10.190383323 +0000 UTC Remote: 2024-08-05 12:48:10.092524025 +0000 UTC m=+24.972177921 (delta=97.859298ms)
	I0805 12:48:10.217735  441429 fix.go:200] guest clock delta is within tolerance: 97.859298ms
	I0805 12:48:10.217740  441429 start.go:83] releasing machines lock for "flannel-119870", held for 24.974228057s
	I0805 12:48:10.217758  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:48:10.218117  441429 main.go:141] libmachine: (flannel-119870) Calling .GetIP
	I0805 12:48:10.221321  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.221685  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:10.221716  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.221867  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:48:10.222435  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:48:10.222621  441429 main.go:141] libmachine: (flannel-119870) Calling .DriverName
	I0805 12:48:10.222715  441429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:48:10.222766  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:10.222996  441429 ssh_runner.go:195] Run: cat /version.json
	I0805 12:48:10.223021  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHHostname
	I0805 12:48:10.226009  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.226277  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.226424  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:10.226463  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.226651  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:10.226867  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:10.226917  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:10.226943  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHPort
	I0805 12:48:10.226947  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:10.227081  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHKeyPath
	I0805 12:48:10.227138  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:10.227226  441429 main.go:141] libmachine: (flannel-119870) Calling .GetSSHUsername
	I0805 12:48:10.227307  441429 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/id_rsa Username:docker}
	I0805 12:48:10.227737  441429 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/flannel-119870/id_rsa Username:docker}
	I0805 12:48:10.305127  441429 ssh_runner.go:195] Run: systemctl --version
	I0805 12:48:10.336923  441429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:48:10.515962  441429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:48:10.523106  441429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:48:10.523190  441429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:48:10.541450  441429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:48:10.541477  441429 start.go:495] detecting cgroup driver to use...
	I0805 12:48:10.541548  441429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:48:10.566807  441429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:48:10.589480  441429 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:48:10.589540  441429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:48:10.609046  441429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:48:10.628820  441429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:48:10.775543  441429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:48:10.988354  441429 docker.go:233] disabling docker service ...
	I0805 12:48:10.988437  441429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:48:11.003528  441429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:48:11.022194  441429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:48:11.173751  441429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:48:11.319473  441429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:48:11.335809  441429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:48:11.359178  441429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:48:11.359245  441429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:48:11.370158  441429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:48:11.370233  441429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:48:11.381056  441429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:48:11.391364  441429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:48:11.401887  441429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:48:11.414510  441429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:48:11.427330  441429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:48:11.447551  441429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
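The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before the daemon is restarted. A rough Go equivalent of just the pause-image and cgroup-manager edits, using regexp instead of sed (the function name and file permissions are illustrative assumptions, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// applyCrioOverrides rewrites the pause_image and cgroup_manager settings in a
// CRI-O drop-in config, matching whole lines the same way the logged sed
// commands do.
func applyCrioOverrides(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := applyCrioOverrides("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}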
	I0805 12:48:11.461204  441429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:48:11.473569  441429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:48:11.473629  441429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:48:11.490857  441429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:48:11.502965  441429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:48:11.637883  441429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:48:11.836148  441429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:48:11.836227  441429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:48:11.842275  441429 start.go:563] Will wait 60s for crictl version
	I0805 12:48:11.842348  441429 ssh_runner.go:195] Run: which crictl
	I0805 12:48:11.846430  441429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:48:11.891960  441429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:48:11.892053  441429 ssh_runner.go:195] Run: crio --version
	I0805 12:48:11.921433  441429 ssh_runner.go:195] Run: crio --version
	I0805 12:48:11.961565  441429 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:48:07.939369  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:48:09.949581  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:48:12.450307  439101 pod_ready.go:102] pod "coredns-7db6d8ff4d-2bvv7" in "kube-system" namespace has status "Ready":"False"
	I0805 12:48:11.901257  439495 api_server.go:279] https://192.168.61.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:48:11.901292  439495 api_server.go:103] status: https://192.168.61.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:48:11.901308  439495 api_server.go:253] Checking apiserver healthz at https://192.168.61.242:8443/healthz ...
	I0805 12:48:11.990013  439495 api_server.go:279] https://192.168.61.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:48:11.990056  439495 api_server.go:103] status: https://192.168.61.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:48:12.153468  439495 api_server.go:253] Checking apiserver healthz at https://192.168.61.242:8443/healthz ...
	I0805 12:48:12.164160  439495 api_server.go:279] https://192.168.61.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:48:12.164200  439495 api_server.go:103] status: https://192.168.61.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:48:12.652976  439495 api_server.go:253] Checking apiserver healthz at https://192.168.61.242:8443/healthz ...
	I0805 12:48:12.664773  439495 api_server.go:279] https://192.168.61.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:48:12.664804  439495 api_server.go:103] status: https://192.168.61.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:48:13.153483  439495 api_server.go:253] Checking apiserver healthz at https://192.168.61.242:8443/healthz ...
	I0805 12:48:13.160798  439495 api_server.go:279] https://192.168.61.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:48:13.160836  439495 api_server.go:103] status: https://192.168.61.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:48:13.653259  439495 api_server.go:253] Checking apiserver healthz at https://192.168.61.242:8443/healthz ...
	I0805 12:48:13.668738  439495 api_server.go:279] https://192.168.61.242:8443/healthz returned 200:
	ok
	I0805 12:48:13.682345  439495 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:48:13.682379  439495 api_server.go:131] duration metric: took 4.529808629s to wait for apiserver health ...
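The api_server.go lines above poll https://192.168.61.242:8443/healthz roughly twice a second until the apiserver stops answering 403/500 and returns 200 ok. A minimal standalone sketch of that retry loop is below; it skips TLS verification and client certificates for brevity, and the interval and timeout values are assumptions rather than minikube's real constants.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps requesting the apiserver /healthz endpoint until it
// returns 200 OK or the timeout elapses, printing non-200 statuses the same
// way the log records 403 and 500 responses before the final "ok".
func pollHealthz(endpoint string, interval, timeout time.Duration) error {
	client := &http.Client{
		// The real check authenticates with client certs; skipping TLS
		// verification here is a simplification for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.242:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}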
	I0805 12:48:13.682392  439495 cni.go:84] Creating CNI manager for ""
	I0805 12:48:13.682401  439495 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:48:13.684410  439495 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:48:13.685819  439495 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:48:13.702381  439495 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:48:13.740292  439495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:48:13.752717  439495 system_pods.go:59] 8 kube-system pods found
	I0805 12:48:13.752757  439495 system_pods.go:61] "coredns-6f6b679f8f-95bb6" [05b3d50e-12a6-4bf6-93dd-2ec9dd74becf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:48:13.752770  439495 system_pods.go:61] "coredns-6f6b679f8f-kkvck" [316d378b-33df-4d70-bb75-88db4972040d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:48:13.752780  439495 system_pods.go:61] "etcd-kubernetes-upgrade-515808" [26dd9eb2-bee6-4e02-ba75-3f9c419d3aeb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:48:13.752790  439495 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-515808" [f2cff29d-8abd-4c36-85e5-efd97106e410] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:48:13.752802  439495 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-515808" [ac6da7a1-a1c9-4c3d-8139-8837f6fcdc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:48:13.752851  439495 system_pods.go:61] "kube-proxy-9mp69" [ad9c256f-e608-49fc-87ca-be8bdc58a210] Running
	I0805 12:48:13.752872  439495 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-515808" [e3335b7f-b6d3-40d5-8e97-88fcb3157804] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:48:13.752890  439495 system_pods.go:61] "storage-provisioner" [38924464-08cd-48ff-84fe-f5aea7a7d198] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:48:13.752909  439495 system_pods.go:74] duration metric: took 12.581283ms to wait for pod list to return data ...
	I0805 12:48:13.752928  439495 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:48:13.757607  439495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:48:13.757635  439495 node_conditions.go:123] node cpu capacity is 2
	I0805 12:48:13.757649  439495 node_conditions.go:105] duration metric: took 4.706503ms to run NodePressure ...
	I0805 12:48:13.757670  439495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:48:14.099026  439495 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:48:14.116548  439495 ops.go:34] apiserver oom_adj: -16
	I0805 12:48:14.116580  439495 kubeadm.go:597] duration metric: took 33.435690685s to restartPrimaryControlPlane
	I0805 12:48:14.116593  439495 kubeadm.go:394] duration metric: took 33.600958142s to StartCluster
	I0805 12:48:14.116617  439495 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:48:14.116712  439495 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:48:14.118499  439495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:48:14.118810  439495 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.242 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:48:14.118943  439495 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:48:14.119051  439495 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-515808"
	I0805 12:48:14.119059  439495 config.go:182] Loaded profile config "kubernetes-upgrade-515808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:48:14.119091  439495 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-515808"
	I0805 12:48:14.119102  439495 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-515808"
	W0805 12:48:14.119105  439495 addons.go:243] addon storage-provisioner should already be in state true
	I0805 12:48:14.119138  439495 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-515808"
	I0805 12:48:14.119156  439495 host.go:66] Checking if "kubernetes-upgrade-515808" exists ...
	I0805 12:48:14.119510  439495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:48:14.119574  439495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:48:14.119596  439495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:48:14.119627  439495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:48:14.120796  439495 out.go:177] * Verifying Kubernetes components...
	I0805 12:48:14.122179  439495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:48:14.142316  439495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0805 12:48:14.142354  439495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0805 12:48:14.142995  439495 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:48:14.143048  439495 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:48:14.143589  439495 main.go:141] libmachine: Using API Version  1
	I0805 12:48:14.143594  439495 main.go:141] libmachine: Using API Version  1
	I0805 12:48:14.143607  439495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:48:14.143611  439495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:48:14.144015  439495 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:48:14.144048  439495 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:48:14.144200  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetState
	I0805 12:48:14.144579  439495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:48:14.144610  439495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:48:14.147160  439495 kapi.go:59] client config for kubernetes-upgrade-515808: &rest.Config{Host:"https://192.168.61.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.crt", KeyFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kubernetes-upgrade-515808/client.key", CAFile:"/home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 12:48:14.147456  439495 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-515808"
	W0805 12:48:14.147471  439495 addons.go:243] addon default-storageclass should already be in state true
	I0805 12:48:14.147503  439495 host.go:66] Checking if "kubernetes-upgrade-515808" exists ...
	I0805 12:48:14.147818  439495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:48:14.147850  439495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:48:14.167509  439495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I0805 12:48:14.167788  439495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0805 12:48:14.168258  439495 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:48:14.168866  439495 main.go:141] libmachine: Using API Version  1
	I0805 12:48:14.168884  439495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:48:14.169264  439495 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:48:14.169455  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetState
	I0805 12:48:14.170377  439495 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:48:14.170891  439495 main.go:141] libmachine: Using API Version  1
	I0805 12:48:14.170909  439495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:48:14.171408  439495 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:48:14.172133  439495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:48:14.172172  439495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:48:14.172436  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:48:14.176219  439495 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:48:11.962983  441429 main.go:141] libmachine: (flannel-119870) Calling .GetIP
	I0805 12:48:11.966468  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:11.970549  441429 main.go:141] libmachine: (flannel-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:41:d8", ip: ""} in network mk-flannel-119870: {Iface:virbr1 ExpiryTime:2024-08-05 13:48:01 +0000 UTC Type:0 Mac:52:54:00:5a:41:d8 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:flannel-119870 Clientid:01:52:54:00:5a:41:d8}
	I0805 12:48:11.970572  441429 main.go:141] libmachine: (flannel-119870) DBG | domain flannel-119870 has defined IP address 192.168.39.69 and MAC address 52:54:00:5a:41:d8 in network mk-flannel-119870
	I0805 12:48:11.970851  441429 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:48:11.977654  441429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:48:11.995703  441429 kubeadm.go:883] updating cluster {Name:flannel-119870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:flannel-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:48:11.995857  441429 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:48:11.995919  441429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:48:12.040702  441429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:48:12.040787  441429 ssh_runner.go:195] Run: which lz4
	I0805 12:48:12.046427  441429 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:48:12.051843  441429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:48:12.051879  441429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:48:13.746418  441429 crio.go:462] duration metric: took 1.700019848s to copy over tarball
	I0805 12:48:13.746545  441429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:48:14.178273  439495 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:48:14.178292  439495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:48:14.178313  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:48:14.182572  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:48:14.182610  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:48:14.182629  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:48:14.182833  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:48:14.183057  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:48:14.183229  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:48:14.183388  439495 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa Username:docker}
	I0805 12:48:14.200709  439495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41445
	I0805 12:48:14.201187  439495 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:48:14.201825  439495 main.go:141] libmachine: Using API Version  1
	I0805 12:48:14.201848  439495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:48:14.202305  439495 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:48:14.202687  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetState
	I0805 12:48:14.204531  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .DriverName
	I0805 12:48:14.204858  439495 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:48:14.204878  439495 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:48:14.204897  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHHostname
	I0805 12:48:14.207888  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:48:14.208338  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:63:a0", ip: ""} in network mk-kubernetes-upgrade-515808: {Iface:virbr3 ExpiryTime:2024-08-05 13:41:27 +0000 UTC Type:0 Mac:52:54:00:c9:63:a0 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:kubernetes-upgrade-515808 Clientid:01:52:54:00:c9:63:a0}
	I0805 12:48:14.208359  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | domain kubernetes-upgrade-515808 has defined IP address 192.168.61.242 and MAC address 52:54:00:c9:63:a0 in network mk-kubernetes-upgrade-515808
	I0805 12:48:14.208531  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHPort
	I0805 12:48:14.208694  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHKeyPath
	I0805 12:48:14.208828  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .GetSSHUsername
	I0805 12:48:14.208951  439495 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kubernetes-upgrade-515808/id_rsa Username:docker}
	I0805 12:48:14.392658  439495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:48:14.413480  439495 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:48:14.413588  439495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:48:14.437519  439495 api_server.go:72] duration metric: took 318.674555ms to wait for apiserver process to appear ...
	I0805 12:48:14.437541  439495 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:48:14.437566  439495 api_server.go:253] Checking apiserver healthz at https://192.168.61.242:8443/healthz ...
	I0805 12:48:14.444029  439495 api_server.go:279] https://192.168.61.242:8443/healthz returned 200:
	ok
	I0805 12:48:14.445382  439495 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:48:14.445405  439495 api_server.go:131] duration metric: took 7.856694ms to wait for apiserver health ...
	I0805 12:48:14.445416  439495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:48:14.452861  439495 system_pods.go:59] 8 kube-system pods found
	I0805 12:48:14.452897  439495 system_pods.go:61] "coredns-6f6b679f8f-95bb6" [05b3d50e-12a6-4bf6-93dd-2ec9dd74becf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:48:14.452908  439495 system_pods.go:61] "coredns-6f6b679f8f-kkvck" [316d378b-33df-4d70-bb75-88db4972040d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:48:14.452920  439495 system_pods.go:61] "etcd-kubernetes-upgrade-515808" [26dd9eb2-bee6-4e02-ba75-3f9c419d3aeb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:48:14.452932  439495 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-515808" [f2cff29d-8abd-4c36-85e5-efd97106e410] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:48:14.452944  439495 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-515808" [ac6da7a1-a1c9-4c3d-8139-8837f6fcdc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:48:14.452955  439495 system_pods.go:61] "kube-proxy-9mp69" [ad9c256f-e608-49fc-87ca-be8bdc58a210] Running
	I0805 12:48:14.452964  439495 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-515808" [e3335b7f-b6d3-40d5-8e97-88fcb3157804] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:48:14.452974  439495 system_pods.go:61] "storage-provisioner" [38924464-08cd-48ff-84fe-f5aea7a7d198] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:48:14.452983  439495 system_pods.go:74] duration metric: took 7.558572ms to wait for pod list to return data ...
	I0805 12:48:14.452996  439495 kubeadm.go:582] duration metric: took 334.15453ms to wait for: map[apiserver:true system_pods:true]
	I0805 12:48:14.453018  439495 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:48:14.456551  439495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:48:14.456577  439495 node_conditions.go:123] node cpu capacity is 2
	I0805 12:48:14.456589  439495 node_conditions.go:105] duration metric: took 3.563592ms to run NodePressure ...
	I0805 12:48:14.456603  439495 start.go:241] waiting for startup goroutines ...
	I0805 12:48:14.495031  439495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:48:14.600580  439495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:48:14.710446  439495 main.go:141] libmachine: Making call to close driver server
	I0805 12:48:14.710476  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .Close
	I0805 12:48:14.710874  439495 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:48:14.710896  439495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:48:14.710907  439495 main.go:141] libmachine: Making call to close driver server
	I0805 12:48:14.710916  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .Close
	I0805 12:48:14.713050  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Closing plugin on server side
	I0805 12:48:14.713088  439495 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:48:14.713108  439495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:48:14.729375  439495 main.go:141] libmachine: Making call to close driver server
	I0805 12:48:14.729401  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .Close
	I0805 12:48:14.731531  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Closing plugin on server side
	I0805 12:48:14.731541  439495 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:48:14.731561  439495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:48:15.518361  439495 main.go:141] libmachine: Making call to close driver server
	I0805 12:48:15.518391  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .Close
	I0805 12:48:15.518767  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Closing plugin on server side
	I0805 12:48:15.518819  439495 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:48:15.518828  439495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:48:15.518838  439495 main.go:141] libmachine: Making call to close driver server
	I0805 12:48:15.518854  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) Calling .Close
	I0805 12:48:15.519247  439495 main.go:141] libmachine: (kubernetes-upgrade-515808) DBG | Closing plugin on server side
	I0805 12:48:15.519257  439495 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:48:15.519274  439495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:48:15.521348  439495 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 12:48:15.522893  439495 addons.go:510] duration metric: took 1.403954391s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 12:48:15.522969  439495 start.go:246] waiting for cluster config update ...
	I0805 12:48:15.522993  439495 start.go:255] writing updated cluster config ...
	I0805 12:48:15.523314  439495 ssh_runner.go:195] Run: rm -f paused
	I0805 12:48:15.589390  439495 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0805 12:48:15.591476  439495 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-515808" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.705666240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722862097705644875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=423d8d6f-433b-4194-b108-3188a8220000 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.706529514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b51582ad-19b6-4561-9741-6fefc5da193b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.706661183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b51582ad-19b6-4561-9741-6fefc5da193b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.707194396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ead981564c9b9e1ebcb5b936f6d11f69c258fb631ce23f57039f04f3ec1c2e4,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092436877711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e848f5e0e44ae2548074c56c3e94382fb03e55e40b874cb91e90b8b7a8669707,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1722862092459203373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d4fe9bd39faae04808ad9ff844adfe6fc41541f5b0c7a58cef141166e2e747,PodSandboxId:80bc0d85ddc4aa5fe19d1fa0c5a05939ddfb5b70c857d779f472d8b0c6c7cdf7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862092457373157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 38924464-08cd-48ff-84fe-f5aea7a7d198,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d4832c4ac9bc97c889014cb2cee01c66195289aa8f3d61bc03d9798f858590,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092413057003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88d
b4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53081f4a2cd90c831060e851e45c1c0d1843bd23d4ee1fd8125f6df608faaef,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:172286208863
8117550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef8d0ece0b5669aa512a9b45019f8a287055b90b391f1f05e8cc33ff2d66d9d,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172286208
8622908606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee1b4cd16333bb00f61a6087d3598d4b6170635ba973b77fc0b024f92d7f8bb2,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722862088647644322,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a6450c8cbddcd99f66ee64fb249c188cab299b8eaf9e3887fe78bc638646a6,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722862088608320152,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059830022908,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88db4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059141332641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5
af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722862057853843984,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862058190810941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMet
adata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722862058136117664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,M
etadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722862058028917941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722862057941631864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b51582ad-19b6-4561-9741-6fefc5da193b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.757508715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa8339b5-7e7f-4112-ab35-406ce22afe27 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.758000103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa8339b5-7e7f-4112-ab35-406ce22afe27 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.760106656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbbd28c1-163e-4db3-9bb9-82bc3b034ef7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.760649536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722862097760622021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbbd28c1-163e-4db3-9bb9-82bc3b034ef7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.761491415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=435a4f99-f471-4ad9-9712-a8871db157b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.761558375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=435a4f99-f471-4ad9-9712-a8871db157b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.761933508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ead981564c9b9e1ebcb5b936f6d11f69c258fb631ce23f57039f04f3ec1c2e4,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092436877711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e848f5e0e44ae2548074c56c3e94382fb03e55e40b874cb91e90b8b7a8669707,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1722862092459203373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d4fe9bd39faae04808ad9ff844adfe6fc41541f5b0c7a58cef141166e2e747,PodSandboxId:80bc0d85ddc4aa5fe19d1fa0c5a05939ddfb5b70c857d779f472d8b0c6c7cdf7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862092457373157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 38924464-08cd-48ff-84fe-f5aea7a7d198,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d4832c4ac9bc97c889014cb2cee01c66195289aa8f3d61bc03d9798f858590,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092413057003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88d
b4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53081f4a2cd90c831060e851e45c1c0d1843bd23d4ee1fd8125f6df608faaef,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:172286208863
8117550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef8d0ece0b5669aa512a9b45019f8a287055b90b391f1f05e8cc33ff2d66d9d,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172286208
8622908606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee1b4cd16333bb00f61a6087d3598d4b6170635ba973b77fc0b024f92d7f8bb2,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722862088647644322,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a6450c8cbddcd99f66ee64fb249c188cab299b8eaf9e3887fe78bc638646a6,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722862088608320152,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059830022908,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88db4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059141332641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5
af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722862057853843984,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862058190810941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMet
adata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722862058136117664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,M
etadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722862058028917941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722862057941631864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=435a4f99-f471-4ad9-9712-a8871db157b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.810635260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f4d6874-e09a-4478-bfed-33ed20353d31 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.811103965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f4d6874-e09a-4478-bfed-33ed20353d31 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.813207274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=143056a3-25aa-49bb-8d35-29e906bcc2cd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.813768127Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722862097813675613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=143056a3-25aa-49bb-8d35-29e906bcc2cd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.814490963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd8f46aa-df92-4841-a5a6-4c316aa77884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.814628844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd8f46aa-df92-4841-a5a6-4c316aa77884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.815342522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ead981564c9b9e1ebcb5b936f6d11f69c258fb631ce23f57039f04f3ec1c2e4,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092436877711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e848f5e0e44ae2548074c56c3e94382fb03e55e40b874cb91e90b8b7a8669707,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1722862092459203373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d4fe9bd39faae04808ad9ff844adfe6fc41541f5b0c7a58cef141166e2e747,PodSandboxId:80bc0d85ddc4aa5fe19d1fa0c5a05939ddfb5b70c857d779f472d8b0c6c7cdf7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862092457373157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 38924464-08cd-48ff-84fe-f5aea7a7d198,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d4832c4ac9bc97c889014cb2cee01c66195289aa8f3d61bc03d9798f858590,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092413057003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88d
b4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53081f4a2cd90c831060e851e45c1c0d1843bd23d4ee1fd8125f6df608faaef,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:172286208863
8117550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef8d0ece0b5669aa512a9b45019f8a287055b90b391f1f05e8cc33ff2d66d9d,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172286208
8622908606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee1b4cd16333bb00f61a6087d3598d4b6170635ba973b77fc0b024f92d7f8bb2,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722862088647644322,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a6450c8cbddcd99f66ee64fb249c188cab299b8eaf9e3887fe78bc638646a6,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722862088608320152,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059830022908,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88db4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059141332641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5
af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722862057853843984,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862058190810941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMet
adata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722862058136117664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,M
etadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722862058028917941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722862057941631864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd8f46aa-df92-4841-a5a6-4c316aa77884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.866281833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=549f619d-6d77-4f65-8078-b59430991455 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.866398862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=549f619d-6d77-4f65-8078-b59430991455 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.868087969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bd80191-615b-411a-96d1-dc12f56913e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.868532763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722862097868497192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bd80191-615b-411a-96d1-dc12f56913e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.869207073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f17669-2da4-46ca-9ac4-4e746c73ff58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.869265658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f17669-2da4-46ca-9ac4-4e746c73ff58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:48:17 kubernetes-upgrade-515808 crio[2343]: time="2024-08-05 12:48:17.869636095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ead981564c9b9e1ebcb5b936f6d11f69c258fb631ce23f57039f04f3ec1c2e4,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092436877711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e848f5e0e44ae2548074c56c3e94382fb03e55e40b874cb91e90b8b7a8669707,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1722862092459203373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d4fe9bd39faae04808ad9ff844adfe6fc41541f5b0c7a58cef141166e2e747,PodSandboxId:80bc0d85ddc4aa5fe19d1fa0c5a05939ddfb5b70c857d779f472d8b0c6c7cdf7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862092457373157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 38924464-08cd-48ff-84fe-f5aea7a7d198,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d4832c4ac9bc97c889014cb2cee01c66195289aa8f3d61bc03d9798f858590,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862092413057003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88d
b4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b53081f4a2cd90c831060e851e45c1c0d1843bd23d4ee1fd8125f6df608faaef,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:172286208863
8117550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef8d0ece0b5669aa512a9b45019f8a287055b90b391f1f05e8cc33ff2d66d9d,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172286208
8622908606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee1b4cd16333bb00f61a6087d3598d4b6170635ba973b77fc0b024f92d7f8bb2,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722862088647644322,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60a6450c8cbddcd99f66ee64fb249c188cab299b8eaf9e3887fe78bc638646a6,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722862088608320152,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431,PodSandboxId:744cf76e195698c92e7acd0830864611844fda7504a3279392802ecda52a83ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059830022908,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kkvck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316d378b-33df-4d70-bb75-88db4972040d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37,PodSandboxId:a841b31994895f3eb551f42c6c07849c4d6aa27c24d3e73f7ed7893c4e6a9144,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722862059141332641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-95bb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3d50e-12a6-4bf6-93dd-2ec9dd74becf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58,PodSandboxId:78d33f3b88691206bb39e03cfcab7ffd7f9d00d67efc6e5
af32b5be5a3d8e682,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722862057853843984,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9mp69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c256f-e608-49fc-87ca-be8bdc58a210,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b,PodSandboxId:772c13650c2373c7501d18209cef648301e1ec02c1c37160ec91d189836d067d,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862058190810941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 271a38732326fd1c37bc5ff20f7c7d1d,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7,PodSandboxId:c68c66f9ff987eaa21dd3542de639b18a1ae705b6d46439d8375afcf0a13291e,Metadata:&ContainerMet
adata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722862058136117664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd146f34bec7a9377039550b796b5bee,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e,PodSandboxId:9369ffa0cc6d989855c7005ce1e19418321ff81413e3c64de4289d5837053a00,M
etadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722862058028917941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda9c948bbb75d56d3854867bf448e27,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576,PodSandboxId:e5c638f9d8812cecb3d7186ee39b325f29e23f32478937ac992433c50c1f5a54,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722862057941631864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-515808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b3094bea649f074ec2625cd825788cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f17669-2da4-46ca-9ac4-4e746c73ff58 name=/runtime.v1.RuntimeService/ListContainers
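	
The repeated level=debug entries above are CRI-O logging each CRI call it serves to the kubelet (Version, ImageFsInfo, ListContainers, several times per second), which is why the same container list appears more than once within the same second of the capture. A minimal sketch of how to view the same data directly on the node, assuming crictl is present in the minikube VM and CRI-O listens on its default socket path:

	$ out/minikube-linux-amd64 -p kubernetes-upgrade-515808 ssh
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	$ sudo journalctl -u crio --since "12:48:00" --no-pager | grep ListContainers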
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e848f5e0e44ae       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   6 seconds ago       Running             kube-proxy                2                   78d33f3b88691       kube-proxy-9mp69
	e9d4fe9bd39fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 seconds ago       Exited              storage-provisioner       3                   80bc0d85ddc4a       storage-provisioner
	5ead981564c9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   6 seconds ago       Running             coredns                   2                   a841b31994895       coredns-6f6b679f8f-95bb6
	a9d4832c4ac9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   6 seconds ago       Running             coredns                   2                   744cf76e19569       coredns-6f6b679f8f-kkvck
	ee1b4cd16333b       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   9 seconds ago       Running             kube-scheduler            2                   9369ffa0cc6d9       kube-scheduler-kubernetes-upgrade-515808
	b53081f4a2cd9       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   9 seconds ago       Running             kube-controller-manager   2                   c68c66f9ff987       kube-controller-manager-kubernetes-upgrade-515808
	fef8d0ece0b56       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 seconds ago       Running             etcd                      2                   e5c638f9d8812       etcd-kubernetes-upgrade-515808
	60a6450c8cbdd       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   9 seconds ago       Running             kube-apiserver            2                   772c13650c237       kube-apiserver-kubernetes-upgrade-515808
	51af140d38858       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago      Exited              coredns                   1                   744cf76e19569       coredns-6f6b679f8f-kkvck
	a1e167cf1fb87       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago      Exited              coredns                   1                   a841b31994895       coredns-6f6b679f8f-95bb6
	8a07512b8f636       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   40 seconds ago      Exited              kube-apiserver            1                   772c13650c237       kube-apiserver-kubernetes-upgrade-515808
	6d386440dfe8b       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   40 seconds ago      Exited              kube-controller-manager   1                   c68c66f9ff987       kube-controller-manager-kubernetes-upgrade-515808
	fdc64156cc6d5       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   40 seconds ago      Exited              kube-scheduler            1                   9369ffa0cc6d9       kube-scheduler-kubernetes-upgrade-515808
	32b56232b8eb2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   40 seconds ago      Exited              etcd                      1                   e5c638f9d8812       etcd-kubernetes-upgrade-515808
	9a3c8a3e06cfa       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   40 seconds ago      Exited              kube-proxy                1                   78d33f3b88691       kube-proxy-9mp69
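	
The table above is the crictl-style container listing for this profile; each ATTEMPT value mirrors the io.kubernetes.container.restartCount annotation in the raw ListContainers responses earlier in the log. An illustrative way to pull the logs of one of the exited attempts on the node (the truncated ID is the exited coredns container from the table; crictl accepts ID prefixes):

	$ out/minikube-linux-amd64 -p kubernetes-upgrade-515808 ssh
	$ sudo crictl logs 51af140d38858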
	
	
	==> coredns [51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5ead981564c9b9e1ebcb5b936f6d11f69c258fb631ce23f57039f04f3ec1c2e4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9d4832c4ac9bc97c889014cb2cee01c66195289aa8f3d61bc03d9798f858590] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
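	
The connection-refused errors in these CoreDNS logs all target 10.96.0.1:443, the ClusterIP of the kubernetes Service in the default namespace (the in-cluster API endpoint); they are what CoreDNS prints while the kube-apiserver container is down between restart attempts. An illustrative way to confirm the Service address and apiserver readiness once it is back, assuming the kubeconfig context created by this profile:

	$ kubectl --context kubernetes-upgrade-515808 -n default get svc kubernetes
	$ kubectl --context kubernetes-upgrade-515808 get --raw /readyz?verbose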
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-515808
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-515808
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:46:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-515808
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:48:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:48:12 +0000   Mon, 05 Aug 2024 12:46:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:48:12 +0000   Mon, 05 Aug 2024 12:46:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:48:12 +0000   Mon, 05 Aug 2024 12:46:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:48:12 +0000   Mon, 05 Aug 2024 12:46:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.242
	  Hostname:    kubernetes-upgrade-515808
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 749b00157de84cf0a9afc77f7ea7f718
	  System UUID:                749b0015-7de8-4cf0-a9af-c77f7ea7f718
	  Boot ID:                    1900a6e1-8beb-49f2-a1f4-59d99bc4a32c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-95bb6                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 coredns-6f6b679f8f-kkvck                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 etcd-kubernetes-upgrade-515808                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         95s
	  kube-system                 kube-apiserver-kubernetes-upgrade-515808             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-515808    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-9mp69                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-kubernetes-upgrade-515808             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 6s                   kube-proxy       
	  Normal  Starting                 36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node kubernetes-upgrade-515808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node kubernetes-upgrade-515808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node kubernetes-upgrade-515808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           93s                  node-controller  Node kubernetes-upgrade-515808 event: Registered Node kubernetes-upgrade-515808 in Controller
	  Normal  RegisteredNode           33s                  node-controller  Node kubernetes-upgrade-515808 event: Registered Node kubernetes-upgrade-515808 in Controller
	  Normal  Starting                 11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x8 over 11s)    kubelet          Node kubernetes-upgrade-515808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)    kubelet          Node kubernetes-upgrade-515808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 11s)    kubelet          Node kubernetes-upgrade-515808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node kubernetes-upgrade-515808 event: Registered Node kubernetes-upgrade-515808 in Controller
	
	
	==> dmesg <==
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.421966] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.067506] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074095] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.239169] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.148711] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +1.723012] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.183708] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[  +0.059802] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.241009] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[  +8.748183] systemd-fstab-generator[1245]: Ignoring "noauto" option for root device
	[  +0.077328] kauditd_printk_skb: 97 callbacks suppressed
	[Aug 5 12:47] kauditd_printk_skb: 108 callbacks suppressed
	[ +12.345811] systemd-fstab-generator[2256]: Ignoring "noauto" option for root device
	[  +0.167807] systemd-fstab-generator[2268]: Ignoring "noauto" option for root device
	[  +0.184435] systemd-fstab-generator[2282]: Ignoring "noauto" option for root device
	[  +0.144538] systemd-fstab-generator[2294]: Ignoring "noauto" option for root device
	[  +0.642195] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	[  +6.239694] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.036741] systemd-fstab-generator[3143]: Ignoring "noauto" option for root device
	[  +3.644966] kauditd_printk_skb: 119 callbacks suppressed
	[Aug 5 12:48] systemd-fstab-generator[3634]: Ignoring "noauto" option for root device
	[  +4.656092] kauditd_printk_skb: 45 callbacks suppressed
	[  +1.758368] systemd-fstab-generator[4215]: Ignoring "noauto" option for root device
	
	
	==> etcd [32b56232b8eb2518beecfb5172f2ee19153e268dfb8022e07add1844d11e5576] <==
	{"level":"info","ts":"2024-08-05T12:47:40.076302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e51295e6db0baf11 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T12:47:40.078305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e51295e6db0baf11 received MsgPreVoteResp from e51295e6db0baf11 at term 2"}
	{"level":"info","ts":"2024-08-05T12:47:40.078372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e51295e6db0baf11 became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T12:47:40.078672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e51295e6db0baf11 received MsgVoteResp from e51295e6db0baf11 at term 3"}
	{"level":"info","ts":"2024-08-05T12:47:40.083796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e51295e6db0baf11 became leader at term 3"}
	{"level":"info","ts":"2024-08-05T12:47:40.083807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e51295e6db0baf11 elected leader e51295e6db0baf11 at term 3"}
	{"level":"info","ts":"2024-08-05T12:47:40.090075Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e51295e6db0baf11","local-member-attributes":"{Name:kubernetes-upgrade-515808 ClientURLs:[https://192.168.61.242:2379]}","request-path":"/0/members/e51295e6db0baf11/attributes","cluster-id":"967bd61a9fa17120","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:47:40.090136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:47:40.090630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:47:40.092519Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T12:47:40.096313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:47:40.112859Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T12:47:40.114143Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.242:2379"}
	{"level":"info","ts":"2024-08-05T12:47:40.116816Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:47:40.116844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:48:06.245418Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T12:48:06.245485Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-515808","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.242:2380"],"advertise-client-urls":["https://192.168.61.242:2379"]}
	{"level":"warn","ts":"2024-08-05T12:48:06.245573Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.242:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:48:06.245604Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.242:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:48:06.245808Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T12:48:06.245827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T12:48:06.247380Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e51295e6db0baf11","current-leader-member-id":"e51295e6db0baf11"}
	{"level":"info","ts":"2024-08-05T12:48:06.251400Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.242:2380"}
	{"level":"info","ts":"2024-08-05T12:48:06.251513Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.242:2380"}
	{"level":"info","ts":"2024-08-05T12:48:06.251527Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-515808","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.242:2380"],"advertise-client-urls":["https://192.168.61.242:2379"]}
	
	
	==> etcd [fef8d0ece0b5669aa512a9b45019f8a287055b90b391f1f05e8cc33ff2d66d9d] <==
	{"level":"info","ts":"2024-08-05T12:48:10.313086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e51295e6db0baf11 received MsgVoteResp from e51295e6db0baf11 at term 4"}
	{"level":"info","ts":"2024-08-05T12:48:10.313098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e51295e6db0baf11 became leader at term 4"}
	{"level":"info","ts":"2024-08-05T12:48:10.313108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e51295e6db0baf11 elected leader e51295e6db0baf11 at term 4"}
	{"level":"info","ts":"2024-08-05T12:48:10.317550Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e51295e6db0baf11","local-member-attributes":"{Name:kubernetes-upgrade-515808 ClientURLs:[https://192.168.61.242:2379]}","request-path":"/0/members/e51295e6db0baf11/attributes","cluster-id":"967bd61a9fa17120","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:48:10.317604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:48:10.318168Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:48:10.318927Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T12:48:10.319652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:48:10.320328Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T12:48:10.321251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.242:2379"}
	{"level":"info","ts":"2024-08-05T12:48:10.328802Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:48:10.328841Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:48:16.656379Z","caller":"traceutil/trace.go:171","msg":"trace[1217111819] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"124.540415ms","start":"2024-08-05T12:48:16.531817Z","end":"2024-08-05T12:48:16.656357Z","steps":["trace[1217111819] 'process raft request'  (duration: 50.189941ms)","trace[1217111819] 'compare'  (duration: 74.174861ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T12:48:16.979806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.469206ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12615023608959572419 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0eb12f12b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0eb12f12b\" value_size:672 lease:3391651572104796582 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-05T12:48:16.979919Z","caller":"traceutil/trace.go:171","msg":"trace[2131861155] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"319.72571ms","start":"2024-08-05T12:48:16.660180Z","end":"2024-08-05T12:48:16.979905Z","steps":["trace[2131861155] 'process raft request'  (duration: 194.767494ms)","trace[2131861155] 'compare'  (duration: 124.326248ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T12:48:16.979964Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T12:48:16.660163Z","time spent":"319.781504ms","remote":"127.0.0.1:47458","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":757,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0eb12f12b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0eb12f12b\" value_size:672 lease:3391651572104796582 >> failure:<>"}
	{"level":"warn","ts":"2024-08-05T12:48:17.233458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.69162ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12615023608959572421 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0ec72a422\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0ec72a422\" value_size:649 lease:3391651572104796582 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-05T12:48:17.234096Z","caller":"traceutil/trace.go:171","msg":"trace[133031870] transaction","detail":"{read_only:false; response_revision:563; number_of_response:1; }","duration":"171.443383ms","start":"2024-08-05T12:48:17.062634Z","end":"2024-08-05T12:48:17.234077Z","steps":["trace[133031870] 'process raft request'  (duration: 62.998583ms)","trace[133031870] 'compare'  (duration: 107.551859ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T12:48:17.593809Z","caller":"traceutil/trace.go:171","msg":"trace[798118290] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"309.442148ms","start":"2024-08-05T12:48:17.284349Z","end":"2024-08-05T12:48:17.593791Z","steps":["trace[798118290] 'process raft request'  (duration: 240.459864ms)","trace[798118290] 'compare'  (duration: 68.813513ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T12:48:17.594126Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T12:48:17.284335Z","time spent":"309.552199ms","remote":"127.0.0.1:47458","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":757,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0eb12f12b\" mod_revision:561 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0eb12f12b\" value_size:672 lease:3391651572104796582 >> failure:<request_range:<key:\"/registry/events/default/kubernetes-upgrade-515808.17e8d5f0eb12f12b\" > >"}
	{"level":"info","ts":"2024-08-05T12:48:17.880536Z","caller":"traceutil/trace.go:171","msg":"trace[1583638936] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"213.689914ms","start":"2024-08-05T12:48:17.666829Z","end":"2024-08-05T12:48:17.880519Z","steps":["trace[1583638936] 'process raft request'  (duration: 117.167739ms)","trace[1583638936] 'compare'  (duration: 96.357716ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T12:48:18.164281Z","caller":"traceutil/trace.go:171","msg":"trace[606114931] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"278.940972ms","start":"2024-08-05T12:48:17.885318Z","end":"2024-08-05T12:48:18.164259Z","steps":["trace[606114931] 'process raft request'  (duration: 224.862832ms)","trace[606114931] 'compare'  (duration: 53.980842ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T12:48:18.378083Z","caller":"traceutil/trace.go:171","msg":"trace[609160235] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"135.797296ms","start":"2024-08-05T12:48:18.242265Z","end":"2024-08-05T12:48:18.378063Z","steps":["trace[609160235] 'process raft request'  (duration: 70.320043ms)","trace[609160235] 'compare'  (duration: 65.375355ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T12:48:18.557012Z","caller":"traceutil/trace.go:171","msg":"trace[528409744] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"153.075941ms","start":"2024-08-05T12:48:18.403915Z","end":"2024-08-05T12:48:18.556991Z","steps":["trace[528409744] 'process raft request'  (duration: 134.317324ms)","trace[528409744] 'compare'  (duration: 18.648328ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T12:48:18.753618Z","caller":"traceutil/trace.go:171","msg":"trace[247172135] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"109.306999ms","start":"2024-08-05T12:48:18.644291Z","end":"2024-08-05T12:48:18.753598Z","steps":["trace[247172135] 'process raft request'  (duration: 101.729286ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:48:19 up 2 min,  0 users,  load average: 1.81, 0.64, 0.23
	Linux kubernetes-upgrade-515808 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [60a6450c8cbddcd99f66ee64fb249c188cab299b8eaf9e3887fe78bc638646a6] <==
	I0805 12:48:11.950334       1 policy_source.go:224] refreshing policies
	I0805 12:48:11.953550       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0805 12:48:11.953632       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0805 12:48:11.961007       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 12:48:11.971367       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 12:48:11.985652       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 12:48:11.988231       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 12:48:11.989745       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0805 12:48:11.991135       1 aggregator.go:171] initial CRD sync complete...
	I0805 12:48:11.991280       1 autoregister_controller.go:144] Starting autoregister controller
	I0805 12:48:11.991309       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 12:48:11.991388       1 cache.go:39] Caches are synced for autoregister controller
	I0805 12:48:12.029133       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 12:48:12.046346       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0805 12:48:12.046473       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0805 12:48:12.046655       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 12:48:12.860490       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 12:48:13.292516       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.242]
	I0805 12:48:13.294176       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 12:48:13.301409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 12:48:13.885127       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 12:48:13.899134       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 12:48:13.961028       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 12:48:14.065144       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 12:48:14.075294       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [8a07512b8f6360aa6c35dc4e3efa0cb0ce510101fd73c5feac954abc9beb326b] <==
	I0805 12:47:55.918332       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0805 12:47:55.918359       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0805 12:47:55.918369       1 establishing_controller.go:92] Shutting down EstablishingController
	I0805 12:47:55.918378       1 naming_controller.go:305] Shutting down NamingConditionController
	I0805 12:47:55.918453       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0805 12:47:55.918481       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0805 12:47:55.918491       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0805 12:47:55.918550       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0805 12:47:55.919051       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 12:47:55.919184       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0805 12:47:55.920177       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0805 12:47:55.920302       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0805 12:47:55.920419       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0805 12:47:55.920529       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 12:47:55.921848       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0805 12:47:55.921980       1 controller.go:157] Shutting down quota evaluator
	I0805 12:47:55.922068       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:47:55.925625       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0805 12:47:55.925782       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0805 12:47:55.926558       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0805 12:47:55.926092       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:47:55.926106       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:47:55.926113       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:47:55.926117       1 controller.go:176] quota evaluator worker shutdown
	I0805 12:47:55.930776       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [6d386440dfe8bcf6194e173d6205798db7ca8e6ec13bbed49371ebb17722f0c7] <==
	I0805 12:47:46.043257       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0805 12:47:46.043348       1 shared_informer.go:320] Caches are synced for taint
	I0805 12:47:46.043450       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0805 12:47:46.043555       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-515808"
	I0805 12:47:46.043615       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0805 12:47:46.043726       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0805 12:47:46.043790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-515808"
	I0805 12:47:46.044817       1 shared_informer.go:320] Caches are synced for namespace
	I0805 12:47:46.045386       1 shared_informer.go:320] Caches are synced for disruption
	I0805 12:47:46.045613       1 shared_informer.go:320] Caches are synced for TTL
	I0805 12:47:46.049185       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0805 12:47:46.049811       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0805 12:47:46.050436       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0805 12:47:46.050478       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0805 12:47:46.073288       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0805 12:47:46.229774       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:47:46.239741       1 shared_informer.go:320] Caches are synced for HPA
	I0805 12:47:46.240983       1 shared_informer.go:320] Caches are synced for attach detach
	I0805 12:47:46.244764       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:47:46.683600       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:47:46.737729       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:47:46.737847       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 12:47:47.698865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="28.941887ms"
	I0805 12:47:47.699338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="73.623µs"
	I0805 12:47:50.917833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="114.173µs"
	
	
	==> kube-controller-manager [b53081f4a2cd90c831060e851e45c1c0d1843bd23d4ee1fd8125f6df608faaef] <==
	I0805 12:48:15.276036       1 shared_informer.go:320] Caches are synced for crt configmap
	I0805 12:48:15.276047       1 shared_informer.go:320] Caches are synced for cronjob
	I0805 12:48:15.276060       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 12:48:15.291820       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0805 12:48:15.308328       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 12:48:15.315591       1 shared_informer.go:320] Caches are synced for taint
	I0805 12:48:15.315785       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0805 12:48:15.315940       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-515808"
	I0805 12:48:15.316001       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0805 12:48:15.316062       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 12:48:15.316117       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0805 12:48:15.316152       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-515808"
	I0805 12:48:15.317046       1 shared_informer.go:320] Caches are synced for disruption
	I0805 12:48:15.318452       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0805 12:48:15.386019       1 shared_informer.go:320] Caches are synced for namespace
	I0805 12:48:15.422935       1 shared_informer.go:320] Caches are synced for service account
	I0805 12:48:15.465399       1 shared_informer.go:320] Caches are synced for stateful set
	I0805 12:48:15.469132       1 shared_informer.go:320] Caches are synced for daemon sets
	I0805 12:48:15.482776       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:48:15.495408       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:48:15.587399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="365.277418ms"
	I0805 12:48:15.587544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.994µs"
	I0805 12:48:15.888932       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:48:15.888969       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 12:48:15.954423       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0805 12:47:41.377337       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0805 12:47:42.828864       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.242"]
	E0805 12:47:42.834912       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0805 12:47:42.955462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0805 12:47:42.955565       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:47:42.955638       1 server_linux.go:169] "Using iptables Proxier"
	I0805 12:47:42.958997       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0805 12:47:42.959390       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0805 12:47:42.960570       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:47:42.962018       1 config.go:197] "Starting service config controller"
	I0805 12:47:42.962406       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:47:42.985380       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:47:42.964399       1 config.go:326] "Starting node config controller"
	I0805 12:47:42.985564       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:47:42.985606       1 shared_informer.go:320] Caches are synced for node config
	I0805 12:47:42.963838       1 config.go:104] "Starting endpoint slice config controller"
	I0805 12:47:42.985765       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:47:42.985842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e848f5e0e44ae2548074c56c3e94382fb03e55e40b874cb91e90b8b7a8669707] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0805 12:48:12.897116       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0805 12:48:12.909109       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.242"]
	E0805 12:48:12.909944       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0805 12:48:12.974543       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0805 12:48:12.974571       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:48:12.974592       1 server_linux.go:169] "Using iptables Proxier"
	I0805 12:48:12.977454       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0805 12:48:12.977829       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0805 12:48:12.978148       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:48:12.982761       1 config.go:197] "Starting service config controller"
	I0805 12:48:12.982823       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:48:12.982864       1 config.go:104] "Starting endpoint slice config controller"
	I0805 12:48:12.982880       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:48:12.984061       1 config.go:326] "Starting node config controller"
	I0805 12:48:12.984129       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:48:13.083861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:48:13.083935       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:48:13.086044       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ee1b4cd16333bb00f61a6087d3598d4b6170635ba973b77fc0b024f92d7f8bb2] <==
	I0805 12:48:09.915754       1 serving.go:386] Generated self-signed cert in-memory
	W0805 12:48:11.899880       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:48:11.899974       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:48:11.899984       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:48:11.900052       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:48:11.997455       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0805 12:48:12.002762       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:48:12.008136       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:48:12.008246       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:48:12.009155       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:48:12.009316       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0805 12:48:12.109597       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fdc64156cc6d50affde37564721f63f165d0bbc8837f91c9220c11a9488cc49e] <==
	I0805 12:47:41.723566       1 serving.go:386] Generated self-signed cert in-memory
	W0805 12:47:42.652101       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:47:42.652506       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:47:42.652654       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:47:42.652806       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:47:42.782570       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0805 12:47:42.785795       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:47:42.792265       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:47:42.792784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:47:42.795916       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:47:42.792832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0805 12:47:42.898935       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:48:06.108241       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0805 12:48:06.108630       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0805 12:48:06.108901       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 05 12:48:08 kubernetes-upgrade-515808 kubelet[3641]: E0805 12:48:08.983652    3641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.242:8443: connect: connection refused" logger="UnhandledError"
	Aug 05 12:48:09 kubernetes-upgrade-515808 kubelet[3641]: W0805 12:48:09.099901    3641 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-515808&limit=500&resourceVersion=0": dial tcp 192.168.61.242:8443: connect: connection refused
	Aug 05 12:48:09 kubernetes-upgrade-515808 kubelet[3641]: E0805 12:48:09.099971    3641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-515808&limit=500&resourceVersion=0\": dial tcp 192.168.61.242:8443: connect: connection refused" logger="UnhandledError"
	Aug 05 12:48:09 kubernetes-upgrade-515808 kubelet[3641]: W0805 12:48:09.130168    3641 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.242:8443: connect: connection refused
	Aug 05 12:48:09 kubernetes-upgrade-515808 kubelet[3641]: E0805 12:48:09.130246    3641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.242:8443: connect: connection refused" logger="UnhandledError"
	Aug 05 12:48:09 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:09.715740    3641 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-515808"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.033895    3641 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-515808"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.034042    3641 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-515808"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.034072    3641 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.035241    3641 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.070514    3641 apiserver.go:52] "Watching apiserver"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.092125    3641 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.130748    3641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad9c256f-e608-49fc-87ca-be8bdc58a210-xtables-lock\") pod \"kube-proxy-9mp69\" (UID: \"ad9c256f-e608-49fc-87ca-be8bdc58a210\") " pod="kube-system/kube-proxy-9mp69"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.130807    3641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/38924464-08cd-48ff-84fe-f5aea7a7d198-tmp\") pod \"storage-provisioner\" (UID: \"38924464-08cd-48ff-84fe-f5aea7a7d198\") " pod="kube-system/storage-provisioner"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.130846    3641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad9c256f-e608-49fc-87ca-be8bdc58a210-lib-modules\") pod \"kube-proxy-9mp69\" (UID: \"ad9c256f-e608-49fc-87ca-be8bdc58a210\") " pod="kube-system/kube-proxy-9mp69"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: E0805 12:48:12.331524    3641 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-515808\" already exists" pod="kube-system/etcd-kubernetes-upgrade-515808"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.376887    3641 scope.go:117] "RemoveContainer" containerID="a1e167cf1fb877d98db648281722a22eab503002146399d5f94af44080472c37"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.377454    3641 scope.go:117] "RemoveContainer" containerID="a97291ccbcdcbf5b02208e879737961e11d7aecc6dcb103cdd892cab35a50f18"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.378040    3641 scope.go:117] "RemoveContainer" containerID="51af140d38858f241be8fe00f1d18e1ee1599065fdb37e7114ef3632f792d431"
	Aug 05 12:48:12 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:12.379135    3641 scope.go:117] "RemoveContainer" containerID="9a3c8a3e06cfad02af17e2352ae61f1f6079479da55b116ca8c3ba0253fa4c58"
	Aug 05 12:48:13 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:13.335918    3641 scope.go:117] "RemoveContainer" containerID="a97291ccbcdcbf5b02208e879737961e11d7aecc6dcb103cdd892cab35a50f18"
	Aug 05 12:48:13 kubernetes-upgrade-515808 kubelet[3641]: I0805 12:48:13.336179    3641 scope.go:117] "RemoveContainer" containerID="e9d4fe9bd39faae04808ad9ff844adfe6fc41541f5b0c7a58cef141166e2e747"
	Aug 05 12:48:13 kubernetes-upgrade-515808 kubelet[3641]: E0805 12:48:13.336305    3641 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(38924464-08cd-48ff-84fe-f5aea7a7d198)\"" pod="kube-system/storage-provisioner" podUID="38924464-08cd-48ff-84fe-f5aea7a7d198"
	Aug 05 12:48:18 kubernetes-upgrade-515808 kubelet[3641]: E0805 12:48:18.208885    3641 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722862098208548373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 12:48:18 kubernetes-upgrade-515808 kubelet[3641]: E0805 12:48:18.208929    3641 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722862098208548373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e9d4fe9bd39faae04808ad9ff844adfe6fc41541f5b0c7a58cef141166e2e747] <==
	I0805 12:48:12.720599       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0805 12:48:12.722923       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-515808 -n kubernetes-upgrade-515808
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-515808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-515808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-515808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-515808: (1.423295538s)
--- FAIL: TestKubernetesUpgrade (445.26s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-335738 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0805 12:45:27.753996  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-335738 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.312424402s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-335738] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-335738" primary control-plane node in "pause-335738" cluster
	* Updating the running kvm2 "pause-335738" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-335738" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:44:43.344035  435321 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:44:43.344171  435321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:44:43.344182  435321 out.go:304] Setting ErrFile to fd 2...
	I0805 12:44:43.344189  435321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:44:43.344374  435321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:44:43.344967  435321 out.go:298] Setting JSON to false
	I0805 12:44:43.346018  435321 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8830,"bootTime":1722853053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:44:43.346094  435321 start.go:139] virtualization: kvm guest
	I0805 12:44:43.368791  435321 out.go:177] * [pause-335738] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:44:43.453415  435321 notify.go:220] Checking for updates...
	I0805 12:44:43.453501  435321 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:44:43.537804  435321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:44:43.636253  435321 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:44:43.774898  435321 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:44:43.808131  435321 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:44:43.851242  435321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:44:43.901680  435321 config.go:182] Loaded profile config "pause-335738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:43.902170  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:43.902229  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:43.920143  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0805 12:44:43.920851  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:43.921677  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:43.921710  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:43.922149  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:43.922394  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:43.922752  435321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:44:43.923238  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:43.923325  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:43.940533  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0805 12:44:43.941054  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:43.941600  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:43.941632  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:43.941981  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:43.942163  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:43.979976  435321 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:44:43.981254  435321 start.go:297] selected driver: kvm2
	I0805 12:44:43.981275  435321 start.go:901] validating driver "kvm2" against &{Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:44:43.981495  435321 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:44:43.981960  435321 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:44:43.982072  435321 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:44:43.998073  435321 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:44:43.999104  435321 cni.go:84] Creating CNI manager for ""
	I0805 12:44:43.999134  435321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:44:43.999241  435321 start.go:340] cluster config:
	{Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:44:43.999515  435321 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:44:44.001520  435321 out.go:177] * Starting "pause-335738" primary control-plane node in "pause-335738" cluster
	I0805 12:44:44.002746  435321 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:44:44.002789  435321 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 12:44:44.002805  435321 cache.go:56] Caching tarball of preloaded images
	I0805 12:44:44.002914  435321 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:44:44.002936  435321 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 12:44:44.003063  435321 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/config.json ...
	I0805 12:44:44.003287  435321 start.go:360] acquireMachinesLock for pause-335738: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:44:58.252843  435321 start.go:364] duration metric: took 14.249505532s to acquireMachinesLock for "pause-335738"
	I0805 12:44:58.252909  435321 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:44:58.252921  435321 fix.go:54] fixHost starting: 
	I0805 12:44:58.253335  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:58.253396  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:58.273749  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0805 12:44:58.274208  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:58.274727  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:58.274754  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:58.275052  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:58.275256  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:58.275414  435321 main.go:141] libmachine: (pause-335738) Calling .GetState
	I0805 12:44:58.277106  435321 fix.go:112] recreateIfNeeded on pause-335738: state=Running err=<nil>
	W0805 12:44:58.277151  435321 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:44:58.279288  435321 out.go:177] * Updating the running kvm2 "pause-335738" VM ...
	I0805 12:44:58.280673  435321 machine.go:94] provisionDockerMachine start ...
	I0805 12:44:58.280701  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:58.280894  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.284133  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.284613  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.284640  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.284791  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.284946  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.285090  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.285225  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.285452  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.285645  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.285656  435321 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:44:58.400571  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-335738
	
	I0805 12:44:58.400607  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.400894  435321 buildroot.go:166] provisioning hostname "pause-335738"
	I0805 12:44:58.400928  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.401212  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.404011  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.404385  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.404407  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.404647  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.404816  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.404970  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.405127  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.405377  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.405594  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.405612  435321 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-335738 && echo "pause-335738" | sudo tee /etc/hostname
	I0805 12:44:58.531104  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-335738
	
	I0805 12:44:58.531139  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.534469  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.534976  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.535021  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.535254  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.535511  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.535713  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.535906  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.536130  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.536346  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.536371  435321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-335738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-335738/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-335738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:44:58.654002  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:44:58.654040  435321 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:44:58.654109  435321 buildroot.go:174] setting up certificates
	I0805 12:44:58.654119  435321 provision.go:84] configureAuth start
	I0805 12:44:58.654136  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.654481  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:44:58.657169  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.657539  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.657566  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.657679  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.659937  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.660312  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.660336  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.660613  435321 provision.go:143] copyHostCerts
	I0805 12:44:58.660679  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:44:58.660690  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:44:58.660740  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:44:58.660833  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:44:58.660842  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:44:58.660863  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:44:58.660914  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:44:58.660921  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:44:58.660945  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:44:58.660988  435321 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.pause-335738 san=[127.0.0.1 192.168.39.97 localhost minikube pause-335738]
	I0805 12:44:59.028284  435321 provision.go:177] copyRemoteCerts
	I0805 12:44:59.028377  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:44:59.028414  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:59.031279  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.031702  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:59.031760  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.031939  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:59.032172  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.032322  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:59.032465  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:44:59.122102  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:44:59.152102  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 12:44:59.185021  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:44:59.216147  435321 provision.go:87] duration metric: took 562.010148ms to configureAuth
	I0805 12:44:59.216200  435321 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:44:59.216425  435321 config.go:182] Loaded profile config "pause-335738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:59.216544  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:59.219728  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.220160  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:59.220193  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.220453  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:59.220684  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.220862  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.220995  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:59.221192  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:59.221433  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:59.221465  435321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:45:07.435365  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:45:07.435400  435321 machine.go:97] duration metric: took 9.154706851s to provisionDockerMachine
	I0805 12:45:07.435416  435321 start.go:293] postStartSetup for "pause-335738" (driver="kvm2")
	I0805 12:45:07.435430  435321 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:45:07.435463  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.435973  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:45:07.436011  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.439119  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.439536  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.439568  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.439811  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.440026  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.440198  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.440359  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.531327  435321 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:45:07.536064  435321 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:45:07.536093  435321 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:45:07.536168  435321 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:45:07.536277  435321 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:45:07.536401  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:45:07.548192  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:07.583195  435321 start.go:296] duration metric: took 147.760389ms for postStartSetup
	I0805 12:45:07.583246  435321 fix.go:56] duration metric: took 9.330325706s for fixHost
	I0805 12:45:07.583273  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.586518  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.586949  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.586981  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.587188  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.587426  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.587614  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.587795  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.587976  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:45:07.588199  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:45:07.588214  435321 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:45:07.709190  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722861907.704347151
	
	I0805 12:45:07.709217  435321 fix.go:216] guest clock: 1722861907.704347151
	I0805 12:45:07.709227  435321 fix.go:229] Guest: 2024-08-05 12:45:07.704347151 +0000 UTC Remote: 2024-08-05 12:45:07.583251272 +0000 UTC m=+24.284926934 (delta=121.095879ms)
	I0805 12:45:07.709254  435321 fix.go:200] guest clock delta is within tolerance: 121.095879ms
	I0805 12:45:07.709261  435321 start.go:83] releasing machines lock for "pause-335738", held for 9.456379931s
	I0805 12:45:07.709285  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.709564  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:45:07.713014  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.713434  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.713461  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.713671  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714276  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714515  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714639  435321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:45:07.714697  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.714762  435321 ssh_runner.go:195] Run: cat /version.json
	I0805 12:45:07.714791  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.717917  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.717952  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718167  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.718188  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718332  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.718447  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.718470  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718508  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.718569  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.718707  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.718708  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.718884  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.718987  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.719193  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.805733  435321 ssh_runner.go:195] Run: systemctl --version
	I0805 12:45:07.828521  435321 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:45:07.992325  435321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:45:08.000564  435321 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:45:08.000647  435321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:45:08.013511  435321 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 12:45:08.013535  435321 start.go:495] detecting cgroup driver to use...
	I0805 12:45:08.013612  435321 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:45:08.037227  435321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:45:08.056733  435321 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:45:08.056797  435321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:45:08.077383  435321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:45:08.135695  435321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:45:08.373203  435321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:45:08.650299  435321 docker.go:233] disabling docker service ...
	I0805 12:45:08.650369  435321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:45:08.733071  435321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:45:08.780524  435321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:45:09.135883  435321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:45:09.436578  435321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:45:09.464255  435321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:45:09.496891  435321 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:45:09.496966  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.517774  435321 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:45:09.517853  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.554633  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.572207  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.585896  435321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:45:09.610404  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.627089  435321 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.648767  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.668714  435321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:45:09.690244  435321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:45:09.705043  435321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:09.934592  435321 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:45:10.469200  435321 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:45:10.469293  435321 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:45:10.475998  435321 start.go:563] Will wait 60s for crictl version
	I0805 12:45:10.476070  435321 ssh_runner.go:195] Run: which crictl
	I0805 12:45:10.515388  435321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:45:10.668529  435321 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:45:10.668682  435321 ssh_runner.go:195] Run: crio --version
	I0805 12:45:10.900094  435321 ssh_runner.go:195] Run: crio --version
	I0805 12:45:10.975777  435321 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:45:10.977172  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:45:10.980756  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:10.981259  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:10.981303  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:10.981603  435321 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:45:10.989940  435321 kubeadm.go:883] updating cluster {Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:45:10.990128  435321 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:45:10.990202  435321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:11.043291  435321 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:11.043319  435321 crio.go:433] Images already preloaded, skipping extraction
	I0805 12:45:11.043368  435321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:11.090395  435321 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:11.090428  435321 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:45:11.090440  435321 kubeadm.go:934] updating node { 192.168.39.97 8443 v1.30.3 crio true true} ...
	I0805 12:45:11.090582  435321 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-335738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:45:11.090699  435321 ssh_runner.go:195] Run: crio config
	I0805 12:45:11.189451  435321 cni.go:84] Creating CNI manager for ""
	I0805 12:45:11.189482  435321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:45:11.189499  435321 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:45:11.189529  435321 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-335738 NodeName:pause-335738 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:45:11.189716  435321 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-335738"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:45:11.189792  435321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:45:11.200510  435321 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:45:11.200601  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:45:11.212958  435321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 12:45:11.237802  435321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:45:11.254534  435321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0805 12:45:11.271371  435321 ssh_runner.go:195] Run: grep 192.168.39.97	control-plane.minikube.internal$ /etc/hosts
	I0805 12:45:11.275240  435321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:11.410170  435321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:45:11.425406  435321 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738 for IP: 192.168.39.97
	I0805 12:45:11.425438  435321 certs.go:194] generating shared ca certs ...
	I0805 12:45:11.425460  435321 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:11.425613  435321 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:45:11.425657  435321 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:45:11.425666  435321 certs.go:256] generating profile certs ...
	I0805 12:45:11.425737  435321 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/client.key
	I0805 12:45:11.425821  435321 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.key.4c2e0008
	I0805 12:45:11.425881  435321 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.key
	I0805 12:45:11.425992  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:45:11.426021  435321 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:45:11.426030  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:45:11.426052  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:45:11.426076  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:45:11.426098  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:45:11.426133  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:11.426731  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:45:11.451227  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:45:11.477587  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:45:11.504930  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:45:11.529685  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 12:45:11.558933  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:45:11.585167  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:45:11.614871  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:45:11.644643  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:45:11.672724  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:45:11.732393  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:45:11.756262  435321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:45:11.773242  435321 ssh_runner.go:195] Run: openssl version
	I0805 12:45:11.778989  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:45:11.790709  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.795841  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.795942  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.802389  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:45:11.812281  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:45:11.823184  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.827757  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.827815  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.833336  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:45:11.843830  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:45:11.855223  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.860007  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.860059  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.865896  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:45:11.876271  435321 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:45:11.881124  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:45:11.887289  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:45:11.896111  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:45:11.901722  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:45:11.907361  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:45:11.913038  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:45:11.918906  435321 kubeadm.go:392] StartCluster: {Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:45:11.919030  435321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:45:11.919069  435321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:45:11.969286  435321 cri.go:89] found id: "2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02"
	I0805 12:45:11.969315  435321 cri.go:89] found id: "add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d"
	I0805 12:45:11.969321  435321 cri.go:89] found id: "862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892"
	I0805 12:45:11.969326  435321 cri.go:89] found id: "fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51"
	I0805 12:45:11.969330  435321 cri.go:89] found id: "62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de"
	I0805 12:45:11.969334  435321 cri.go:89] found id: "57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832"
	I0805 12:45:11.969338  435321 cri.go:89] found id: ""
	I0805 12:45:11.969406  435321 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
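For anyone trying to reproduce the container-runtime reconfiguration that the stderr log above records (crictl endpoint, pause image, cgroup driver, CRI-O restart) outside of minikube, the equivalent manual steps are roughly the following. This is a minimal sketch assuming the same Buildroot guest and the /etc/crio/crio.conf.d/02-crio.conf drop-in that minikube edits in this run; it is not part of the test itself.

  # Point crictl at the CRI-O socket (mirrors the tee to /etc/crictl.yaml in the log).
  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

  # Set the pause image and the cgroupfs cgroup manager, as the sed commands in the log do.
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

  # Apply and verify.
  sudo systemctl daemon-reload
  sudo systemctl restart crio
  sudo crictl version

In this run the restart succeeds: the log shows crio coming back around 12:45:10 and crictl reporting RuntimeName cri-o, RuntimeVersion 1.29.1, so the failure lies later in the start sequence rather than in runtime provisioning.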
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-335738 -n pause-335738
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-335738 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-335738 logs -n 25: (1.476232595s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-960699          | force-systemd-flag-960699 | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	| start   | -p running-upgrade-313656             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:42 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	| start   | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:41 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-823434 ssh               | cert-options-823434       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-823434 -- sudo        | cert-options-823434       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-823434                | cert-options-823434       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	| start   | -p kubernetes-upgrade-515808          | kubernetes-upgrade-515808 | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-833202 sudo           | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:41 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:41 UTC | 05 Aug 24 12:41 UTC |
	| start   | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:41 UTC | 05 Aug 24 12:42 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-833202 sudo           | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:42 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:42 UTC | 05 Aug 24 12:42 UTC |
	| start   | -p stopped-upgrade-938024             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 12:42 UTC | 05 Aug 24 12:43 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-313656             | running-upgrade-313656    | jenkins | v1.33.1 | 05 Aug 24 12:42 UTC | 05 Aug 24 12:43 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-623276             | cert-expiration-623276    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-938024 stop           | minikube                  | jenkins | v1.26.0 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	| start   | -p stopped-upgrade-938024             | stopped-upgrade-938024    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:44 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-623276             | cert-expiration-623276    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	| start   | -p pause-335738 --memory=2048         | pause-335738              | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:44 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-313656             | running-upgrade-313656    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	| start   | -p auto-119870 --memory=3072          | auto-119870               | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-938024             | stopped-upgrade-938024    | jenkins | v1.33.1 | 05 Aug 24 12:44 UTC | 05 Aug 24 12:44 UTC |
	| start   | -p kindnet-119870                     | kindnet-119870            | jenkins | v1.33.1 | 05 Aug 24 12:44 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-335738                       | pause-335738              | jenkins | v1.33.1 | 05 Aug 24 12:44 UTC | 05 Aug 24 12:45 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:44:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:44:43.344035  435321 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:44:43.344171  435321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:44:43.344182  435321 out.go:304] Setting ErrFile to fd 2...
	I0805 12:44:43.344189  435321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:44:43.344374  435321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:44:43.344967  435321 out.go:298] Setting JSON to false
	I0805 12:44:43.346018  435321 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8830,"bootTime":1722853053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:44:43.346094  435321 start.go:139] virtualization: kvm guest
	I0805 12:44:43.368791  435321 out.go:177] * [pause-335738] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:44:43.453415  435321 notify.go:220] Checking for updates...
	I0805 12:44:43.453501  435321 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:44:43.537804  435321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:44:43.636253  435321 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:44:43.774898  435321 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:44:43.808131  435321 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:44:43.851242  435321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:44:43.901680  435321 config.go:182] Loaded profile config "pause-335738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:43.902170  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:43.902229  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:43.920143  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0805 12:44:43.920851  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:43.921677  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:43.921710  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:43.922149  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:43.922394  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:43.922752  435321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:44:43.923238  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:43.923325  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:43.940533  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0805 12:44:43.941054  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:43.941600  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:43.941632  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:43.941981  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:43.942163  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:43.979976  435321 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:44:43.981254  435321 start.go:297] selected driver: kvm2
	I0805 12:44:43.981275  435321 start.go:901] validating driver "kvm2" against &{Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:44:43.981495  435321 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:44:43.981960  435321 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:44:43.982072  435321 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:44:43.998073  435321 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:44:43.999104  435321 cni.go:84] Creating CNI manager for ""
	I0805 12:44:43.999134  435321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:44:43.999241  435321 start.go:340] cluster config:
	{Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:44:43.999515  435321 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:44:44.001520  435321 out.go:177] * Starting "pause-335738" primary control-plane node in "pause-335738" cluster
	I0805 12:44:42.970473  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:42.971049  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:42.971076  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:42.970996  435153 retry.go:31] will retry after 2.255351199s: waiting for machine to come up
	I0805 12:44:45.229689  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:45.230326  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:45.230353  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:45.230252  435153 retry.go:31] will retry after 2.54222134s: waiting for machine to come up
	I0805 12:44:42.924035  434553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 12:44:42.924155  434553 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 12:44:43.925144  434553 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00193841s
	I0805 12:44:43.925248  434553 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 12:44:44.002746  435321 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:44:44.002789  435321 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 12:44:44.002805  435321 cache.go:56] Caching tarball of preloaded images
	I0805 12:44:44.002914  435321 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:44:44.002936  435321 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 12:44:44.003063  435321 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/config.json ...
	I0805 12:44:44.003287  435321 start.go:360] acquireMachinesLock for pause-335738: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:44:48.925717  434553 kubeadm.go:310] [api-check] The API server is healthy after 5.002245616s
	I0805 12:44:48.935922  434553 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 12:44:48.947374  434553 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 12:44:48.974358  434553 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 12:44:48.974605  434553 kubeadm.go:310] [mark-control-plane] Marking the node auto-119870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 12:44:48.985798  434553 kubeadm.go:310] [bootstrap-token] Using token: bjp3f3.p69em8uudx6hyl0p
	I0805 12:44:48.987171  434553 out.go:204]   - Configuring RBAC rules ...
	I0805 12:44:48.987290  434553 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 12:44:48.994374  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 12:44:49.017673  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 12:44:49.022435  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 12:44:49.028617  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 12:44:49.032676  434553 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 12:44:49.332279  434553 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 12:44:49.768880  434553 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 12:44:50.331385  434553 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 12:44:50.331413  434553 kubeadm.go:310] 
	I0805 12:44:50.331471  434553 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 12:44:50.331509  434553 kubeadm.go:310] 
	I0805 12:44:50.331637  434553 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 12:44:50.331654  434553 kubeadm.go:310] 
	I0805 12:44:50.331699  434553 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 12:44:50.331801  434553 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 12:44:50.331877  434553 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 12:44:50.331886  434553 kubeadm.go:310] 
	I0805 12:44:50.331984  434553 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 12:44:50.331993  434553 kubeadm.go:310] 
	I0805 12:44:50.332062  434553 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 12:44:50.332084  434553 kubeadm.go:310] 
	I0805 12:44:50.332159  434553 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 12:44:50.332237  434553 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 12:44:50.332296  434553 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 12:44:50.332305  434553 kubeadm.go:310] 
	I0805 12:44:50.332383  434553 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 12:44:50.332494  434553 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 12:44:50.332505  434553 kubeadm.go:310] 
	I0805 12:44:50.332581  434553 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bjp3f3.p69em8uudx6hyl0p \
	I0805 12:44:50.332705  434553 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 12:44:50.332740  434553 kubeadm.go:310] 	--control-plane 
	I0805 12:44:50.332749  434553 kubeadm.go:310] 
	I0805 12:44:50.332857  434553 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 12:44:50.332867  434553 kubeadm.go:310] 
	I0805 12:44:50.332964  434553 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bjp3f3.p69em8uudx6hyl0p \
	I0805 12:44:50.333114  434553 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 12:44:50.333257  434553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 12:44:50.333349  434553 cni.go:84] Creating CNI manager for ""
	I0805 12:44:50.333367  434553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:44:50.334958  434553 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:44:47.773613  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:47.774165  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:47.774190  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:47.774099  435153 retry.go:31] will retry after 3.606807249s: waiting for machine to come up
	I0805 12:44:51.384791  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:51.385349  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:51.385373  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:51.385294  435153 retry.go:31] will retry after 5.167725361s: waiting for machine to come up
	I0805 12:44:50.336085  434553 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:44:50.347010  434553 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:44:50.365133  434553 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:44:50.365244  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:50.365295  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-119870 minikube.k8s.io/updated_at=2024_08_05T12_44_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=auto-119870 minikube.k8s.io/primary=true
	I0805 12:44:50.405054  434553 ops.go:34] apiserver oom_adj: -16
	I0805 12:44:50.487680  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:50.988580  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:51.487877  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:51.988334  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:52.487841  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:56.557768  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.558350  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has current primary IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.558384  434893 main.go:141] libmachine: (kindnet-119870) Found IP for machine: 192.168.72.10
	I0805 12:44:56.558399  434893 main.go:141] libmachine: (kindnet-119870) Reserving static IP address...
	I0805 12:44:56.558757  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find host DHCP lease matching {name: "kindnet-119870", mac: "52:54:00:a2:57:b7", ip: "192.168.72.10"} in network mk-kindnet-119870
	I0805 12:44:56.633347  434893 main.go:141] libmachine: (kindnet-119870) DBG | Getting to WaitForSSH function...
	I0805 12:44:56.633387  434893 main.go:141] libmachine: (kindnet-119870) Reserved static IP address: 192.168.72.10
	I0805 12:44:56.633438  434893 main.go:141] libmachine: (kindnet-119870) Waiting for SSH to be available...
	I0805 12:44:56.636062  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.636592  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.636629  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.636731  434893 main.go:141] libmachine: (kindnet-119870) DBG | Using SSH client type: external
	I0805 12:44:56.636753  434893 main.go:141] libmachine: (kindnet-119870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa (-rw-------)
	I0805 12:44:56.636785  434893 main.go:141] libmachine: (kindnet-119870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:44:56.636796  434893 main.go:141] libmachine: (kindnet-119870) DBG | About to run SSH command:
	I0805 12:44:56.636809  434893 main.go:141] libmachine: (kindnet-119870) DBG | exit 0
	I0805 12:44:56.760018  434893 main.go:141] libmachine: (kindnet-119870) DBG | SSH cmd err, output: <nil>: 
	I0805 12:44:56.760338  434893 main.go:141] libmachine: (kindnet-119870) KVM machine creation complete!
	I0805 12:44:56.760676  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetConfigRaw
	I0805 12:44:56.761244  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:56.761466  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:56.761682  434893 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 12:44:56.761701  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetState
	I0805 12:44:56.763034  434893 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 12:44:56.763052  434893 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 12:44:56.763059  434893 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 12:44:56.763068  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:56.765321  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.765673  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.765705  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.765817  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:56.766014  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.766185  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.766313  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:56.766466  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:56.766677  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:56.766691  434893 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 12:44:52.987702  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:53.488336  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:53.987711  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:54.488037  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:54.987903  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:55.487849  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:55.987844  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:56.488378  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:56.988332  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:57.488622  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.252843  435321 start.go:364] duration metric: took 14.249505532s to acquireMachinesLock for "pause-335738"
	I0805 12:44:58.252909  435321 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:44:58.252921  435321 fix.go:54] fixHost starting: 
	I0805 12:44:58.253335  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:58.253396  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:58.273749  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0805 12:44:58.274208  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:58.274727  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:58.274754  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:58.275052  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:58.275256  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:58.275414  435321 main.go:141] libmachine: (pause-335738) Calling .GetState
	I0805 12:44:58.277106  435321 fix.go:112] recreateIfNeeded on pause-335738: state=Running err=<nil>
	W0805 12:44:58.277151  435321 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:44:58.279288  435321 out.go:177] * Updating the running kvm2 "pause-335738" VM ...
	I0805 12:44:58.280673  435321 machine.go:94] provisionDockerMachine start ...
	I0805 12:44:58.280701  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:58.280894  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.284133  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.284613  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.284640  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.284791  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.284946  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.285090  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.285225  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.285452  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.285645  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.285656  435321 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:44:56.867111  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:44:56.867137  434893 main.go:141] libmachine: Detecting the provisioner...
	I0805 12:44:56.867145  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:56.869996  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.870310  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.870346  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.870481  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:56.870703  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.870913  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.871081  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:56.871331  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:56.871513  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:56.871522  434893 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 12:44:56.972557  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 12:44:56.972641  434893 main.go:141] libmachine: found compatible host: buildroot
	I0805 12:44:56.972653  434893 main.go:141] libmachine: Provisioning with buildroot...
	I0805 12:44:56.972662  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetMachineName
	I0805 12:44:56.972939  434893 buildroot.go:166] provisioning hostname "kindnet-119870"
	I0805 12:44:56.972966  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetMachineName
	I0805 12:44:56.973177  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:56.976140  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.976567  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.976597  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.976706  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:56.976879  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.977035  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.977208  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:56.977385  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:56.977560  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:56.977572  434893 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-119870 && echo "kindnet-119870" | sudo tee /etc/hostname
	I0805 12:44:57.099024  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-119870
	
	I0805 12:44:57.099057  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.101768  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.102132  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.102161  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.102552  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:57.102772  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.102977  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.103136  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:57.103332  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:57.103508  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:57.103525  434893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-119870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-119870/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-119870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:44:57.213876  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:44:57.213908  434893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:44:57.213949  434893 buildroot.go:174] setting up certificates
	I0805 12:44:57.213962  434893 provision.go:84] configureAuth start
	I0805 12:44:57.213973  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetMachineName
	I0805 12:44:57.214331  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:57.217345  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.217761  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.217786  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.217995  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.220388  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.220709  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.220753  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.220864  434893 provision.go:143] copyHostCerts
	I0805 12:44:57.220923  434893 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:44:57.220939  434893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:44:57.221004  434893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:44:57.221103  434893 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:44:57.221113  434893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:44:57.221133  434893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:44:57.221185  434893 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:44:57.221192  434893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:44:57.221208  434893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:44:57.221254  434893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.kindnet-119870 san=[127.0.0.1 192.168.72.10 kindnet-119870 localhost minikube]
	I0805 12:44:57.576576  434893 provision.go:177] copyRemoteCerts
	I0805 12:44:57.576643  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:44:57.576670  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.579637  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.579986  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.580020  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.580264  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:57.580449  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.580620  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:57.580733  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:57.663238  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0805 12:44:57.688031  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:44:57.712663  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:44:57.743530  434893 provision.go:87] duration metric: took 529.553371ms to configureAuth
	I0805 12:44:57.743559  434893 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:44:57.743790  434893 config.go:182] Loaded profile config "kindnet-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:57.743904  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.746398  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.746760  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.746781  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.746980  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:57.747181  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.747343  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.747465  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:57.747599  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:57.747813  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:57.747831  434893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:44:58.016168  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:44:58.016204  434893 main.go:141] libmachine: Checking connection to Docker...
	I0805 12:44:58.016212  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetURL
	I0805 12:44:58.017670  434893 main.go:141] libmachine: (kindnet-119870) DBG | Using libvirt version 6000000
	I0805 12:44:58.020355  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.020875  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.020908  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.021074  434893 main.go:141] libmachine: Docker is up and running!
	I0805 12:44:58.021090  434893 main.go:141] libmachine: Reticulating splines...
	I0805 12:44:58.021098  434893 client.go:171] duration metric: took 25.634465705s to LocalClient.Create
	I0805 12:44:58.021120  434893 start.go:167] duration metric: took 25.634531809s to libmachine.API.Create "kindnet-119870"
	I0805 12:44:58.021129  434893 start.go:293] postStartSetup for "kindnet-119870" (driver="kvm2")
	I0805 12:44:58.021144  434893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:44:58.021161  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.021408  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:44:58.021439  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.023811  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.024132  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.024163  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.024330  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.024527  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.024715  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.024883  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:58.106053  434893 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:44:58.110316  434893 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:44:58.110342  434893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:44:58.110427  434893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:44:58.110535  434893 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:44:58.110650  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:44:58.120388  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:44:58.144312  434893 start.go:296] duration metric: took 123.168771ms for postStartSetup
	I0805 12:44:58.144370  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetConfigRaw
	I0805 12:44:58.144991  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:58.147736  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.148218  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.148253  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.148486  434893 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/config.json ...
	I0805 12:44:58.148668  434893 start.go:128] duration metric: took 25.787662742s to createHost
	I0805 12:44:58.148701  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.151139  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.151426  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.151457  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.151604  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.151810  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.152013  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.152172  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.152392  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.152605  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:58.152619  434893 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:44:58.252636  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722861898.226477509
	
	I0805 12:44:58.252665  434893 fix.go:216] guest clock: 1722861898.226477509
	I0805 12:44:58.252680  434893 fix.go:229] Guest: 2024-08-05 12:44:58.226477509 +0000 UTC Remote: 2024-08-05 12:44:58.148689335 +0000 UTC m=+51.328647468 (delta=77.788174ms)
	I0805 12:44:58.252726  434893 fix.go:200] guest clock delta is within tolerance: 77.788174ms
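	The two fix.go lines above show how the guest clock is validated: the tool runs "date +%s.%N" inside the VM over SSH, parses the result, and compares it to the host clock, accepting the host when the skew is small (here 77.788174ms). A minimal, self-contained Go sketch of that comparison follows; parseGuestClock and the 10-second tolerance are illustrative assumptions for this sketch, not minikube's actual helper or constant.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (e.g. "1722861898.226477509")
	// into a time.Time. Helper name and error handling are illustrative only.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1722861898.226477509")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// Tolerance value is an assumption for the sketch, not minikube's constant.
		const tolerance = 10 * time.Second
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}
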
	I0805 12:44:58.252734  434893 start.go:83] releasing machines lock for "kindnet-119870", held for 25.891936471s
	I0805 12:44:58.252772  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.253119  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:58.255933  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.256291  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.256316  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.256544  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.257100  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.257333  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.257443  434893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:44:58.257488  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.257541  434893 ssh_runner.go:195] Run: cat /version.json
	I0805 12:44:58.257568  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.260338  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.260586  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.260738  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.260769  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.260934  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.260947  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.260972  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.261135  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.261158  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.261326  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.261353  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.261511  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.261517  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:58.261654  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:58.364801  434893 ssh_runner.go:195] Run: systemctl --version
	I0805 12:44:58.371389  434893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:44:58.534509  434893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:44:58.541262  434893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:44:58.541329  434893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:44:58.560185  434893 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:44:58.560215  434893 start.go:495] detecting cgroup driver to use...
	I0805 12:44:58.560297  434893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:44:58.577096  434893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:44:58.591396  434893 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:44:58.591451  434893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:44:58.605793  434893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:44:58.621914  434893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:44:58.756993  434893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:44:58.926921  434893 docker.go:233] disabling docker service ...
	I0805 12:44:58.927001  434893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:44:58.944233  434893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:44:58.957369  434893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:44:59.103223  434893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:44:59.247242  434893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:44:59.264889  434893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:44:59.285696  434893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:44:59.285770  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.297232  434893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:44:59.297305  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.308055  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.318602  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.329408  434893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:44:59.340384  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.354364  434893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.376155  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
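	The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place on the guest: it pins the pause image, switches the cgroup manager to cgroupfs, sets conmon_cgroup, and injects the net.ipv4.ip_unprivileged_port_start sysctl. A small Go sketch of composing and optionally executing that style of edit is below; the applyCrioOverrides function and its dry-run flag are illustrative, not minikube's internal API.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyCrioOverrides builds (and, when dryRun is false, runs) the same style of
	// in-place edits the log shows: pin the pause image and force the cgroup manager.
	// The pauseImage/cgroupManager values mirror the log; everything else is a sketch.
	func applyCrioOverrides(conf, pauseImage, cgroupManager string, dryRun bool) error {
		cmds := []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		}
		for _, c := range cmds {
			if dryRun {
				fmt.Println(c)
				continue
			}
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				return fmt.Errorf("%s: %v: %s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		// Dry run so the sketch is safe to execute anywhere.
		_ = applyCrioOverrides("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.9", "cgroupfs", true)
	}
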
	I0805 12:44:59.389766  434893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:44:59.402604  434893 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:44:59.402693  434893 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:44:59.417804  434893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:44:59.428825  434893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:44:59.560900  434893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:44:59.702965  434893 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:44:59.703036  434893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:44:59.708288  434893 start.go:563] Will wait 60s for crictl version
	I0805 12:44:59.708351  434893 ssh_runner.go:195] Run: which crictl
	I0805 12:44:59.712258  434893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:44:59.755798  434893 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:44:59.755907  434893 ssh_runner.go:195] Run: crio --version
	I0805 12:44:59.783941  434893 ssh_runner.go:195] Run: crio --version
	I0805 12:44:59.814845  434893 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:44:59.816091  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:59.818988  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:59.819333  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:59.819371  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:59.819646  434893 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:44:59.823947  434893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:44:59.836996  434893 kubeadm.go:883] updating cluster {Name:kindnet-119870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:kindnet-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:44:59.837118  434893 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:44:59.837167  434893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:44:59.870712  434893 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:44:59.870793  434893 ssh_runner.go:195] Run: which lz4
	I0805 12:44:59.874840  434893 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:44:59.879189  434893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:44:59.879213  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:45:01.359338  434893 crio.go:462] duration metric: took 1.484532897s to copy over tarball
	I0805 12:45:01.359440  434893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:44:57.987769  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.487712  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.988274  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:59.488526  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:59.988523  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:00.488628  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:00.988722  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:01.488747  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:01.988602  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:02.488202  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.400571  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-335738
	
	I0805 12:44:58.400607  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.400894  435321 buildroot.go:166] provisioning hostname "pause-335738"
	I0805 12:44:58.400928  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.401212  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.404011  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.404385  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.404407  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.404647  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.404816  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.404970  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.405127  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.405377  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.405594  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.405612  435321 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-335738 && echo "pause-335738" | sudo tee /etc/hostname
	I0805 12:44:58.531104  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-335738
	
	I0805 12:44:58.531139  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.534469  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.534976  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.535021  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.535254  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.535511  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.535713  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.535906  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.536130  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.536346  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.536371  435321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-335738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-335738/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-335738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:44:58.654002  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:44:58.654040  435321 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:44:58.654109  435321 buildroot.go:174] setting up certificates
	I0805 12:44:58.654119  435321 provision.go:84] configureAuth start
	I0805 12:44:58.654136  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.654481  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:44:58.657169  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.657539  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.657566  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.657679  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.659937  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.660312  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.660336  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.660613  435321 provision.go:143] copyHostCerts
	I0805 12:44:58.660679  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:44:58.660690  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:44:58.660740  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:44:58.660833  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:44:58.660842  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:44:58.660863  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:44:58.660914  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:44:58.660921  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:44:58.660945  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:44:58.660988  435321 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.pause-335738 san=[127.0.0.1 192.168.39.97 localhost minikube pause-335738]
	I0805 12:44:59.028284  435321 provision.go:177] copyRemoteCerts
	I0805 12:44:59.028377  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:44:59.028414  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:59.031279  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.031702  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:59.031760  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.031939  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:59.032172  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.032322  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:59.032465  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:44:59.122102  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:44:59.152102  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 12:44:59.185021  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:44:59.216147  435321 provision.go:87] duration metric: took 562.010148ms to configureAuth
	I0805 12:44:59.216200  435321 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:44:59.216425  435321 config.go:182] Loaded profile config "pause-335738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:59.216544  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:59.219728  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.220160  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:59.220193  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.220453  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:59.220684  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.220862  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.220995  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:59.221192  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:59.221433  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:59.221465  435321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:45:02.988025  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:03.487921  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:03.987872  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:04.083399  434553 kubeadm.go:1113] duration metric: took 13.718223985s to wait for elevateKubeSystemPrivileges
	I0805 12:45:04.083444  434553 kubeadm.go:394] duration metric: took 24.284474624s to StartCluster
	I0805 12:45:04.083471  434553 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.083556  434553 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:45:04.084789  434553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.085043  434553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 12:45:04.085058  434553 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:45:04.085122  434553 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:45:04.085206  434553 addons.go:69] Setting storage-provisioner=true in profile "auto-119870"
	I0805 12:45:04.085238  434553 addons.go:234] Setting addon storage-provisioner=true in "auto-119870"
	I0805 12:45:04.085269  434553 config.go:182] Loaded profile config "auto-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:45:04.085272  434553 host.go:66] Checking if "auto-119870" exists ...
	I0805 12:45:04.085262  434553 addons.go:69] Setting default-storageclass=true in profile "auto-119870"
	I0805 12:45:04.085326  434553 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-119870"
	I0805 12:45:04.085654  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.085678  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.085783  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.085835  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.086659  434553 out.go:177] * Verifying Kubernetes components...
	I0805 12:45:04.088057  434553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:04.106100  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0805 12:45:04.106147  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37415
	I0805 12:45:04.106610  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.106722  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.107146  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.107170  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.107298  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.107328  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.107585  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.107802  434553 main.go:141] libmachine: (auto-119870) Calling .GetState
	I0805 12:45:04.107856  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.108549  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.108896  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.111991  434553 addons.go:234] Setting addon default-storageclass=true in "auto-119870"
	I0805 12:45:04.112036  434553 host.go:66] Checking if "auto-119870" exists ...
	I0805 12:45:04.112313  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.112343  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.129062  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0805 12:45:04.129621  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.130233  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.130253  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.134972  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I0805 12:45:04.135159  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.135415  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.135579  434553 main.go:141] libmachine: (auto-119870) Calling .GetState
	I0805 12:45:04.136311  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.136340  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.136785  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.137368  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.137407  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.138414  434553 main.go:141] libmachine: (auto-119870) Calling .DriverName
	I0805 12:45:04.140386  434553 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:45:03.825786  434893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.466309401s)
	I0805 12:45:03.825816  434893 crio.go:469] duration metric: took 2.466442343s to extract the tarball
	I0805 12:45:03.825825  434893 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:45:03.873168  434893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:03.922121  434893 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:03.922146  434893 cache_images.go:84] Images are preloaded, skipping loading
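	Both crictl checks above ("couldn't find preloaded image" before the tarball is extracted, "all images are preloaded" after) come down to parsing the output of "sudo crictl images --output json" and looking for the expected repo tags. A minimal Go sketch of that decision follows; the struct mirrors the usual shape of crictl's images JSON, and the sample payload and hasImage helper are illustrative.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the shape of `crictl images --output json`:
	// {"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"], ...}, ...]}
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether any listed image carries the wanted repo tag.
	func hasImage(raw []byte, want string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"]}]}`)
		ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.30.3")
		if err != nil {
			panic(err)
		}
		fmt.Println("preloaded:", ok) // prints: preloaded: true
	}
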
	I0805 12:45:03.922155  434893 kubeadm.go:934] updating node { 192.168.72.10 8443 v1.30.3 crio true true} ...
	I0805 12:45:03.922293  434893 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-119870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0805 12:45:03.922397  434893 ssh_runner.go:195] Run: crio config
	I0805 12:45:03.979120  434893 cni.go:84] Creating CNI manager for "kindnet"
	I0805 12:45:03.979175  434893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:45:03.979209  434893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-119870 NodeName:kindnet-119870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:45:03.979439  434893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-119870"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:45:03.979521  434893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:45:03.989880  434893 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:45:03.989964  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:45:03.999901  434893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 12:45:04.019677  434893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:45:04.041630  434893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0805 12:45:04.065172  434893 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I0805 12:45:04.070320  434893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:45:04.087860  434893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:04.224230  434893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:45:04.249531  434893 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870 for IP: 192.168.72.10
	I0805 12:45:04.249562  434893 certs.go:194] generating shared ca certs ...
	I0805 12:45:04.249586  434893 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.249787  434893 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:45:04.249855  434893 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:45:04.249870  434893 certs.go:256] generating profile certs ...
	I0805 12:45:04.249961  434893 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.key
	I0805 12:45:04.249979  434893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt with IP's: []
	I0805 12:45:04.346617  434893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt ...
	I0805 12:45:04.346643  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: {Name:mk3f222c7678011251b9be7adaed1cca9432f54a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.349645  434893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.key ...
	I0805 12:45:04.349672  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.key: {Name:mk1559a046ba6d292b37a939a000ecb417c1d69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.349802  434893 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03
	I0805 12:45:04.349825  434893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.10]
	I0805 12:45:04.837610  434893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03 ...
	I0805 12:45:04.837641  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03: {Name:mk3f07972809b40722feea3cc23349534a06b43c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.876653  434893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03 ...
	I0805 12:45:04.876680  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03: {Name:mka83207883f1a382a731733dd6b27e345d8def5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.876794  434893 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt
	I0805 12:45:04.876908  434893 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key
	I0805 12:45:04.877006  434893 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key
	I0805 12:45:04.877027  434893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt with IP's: []
	I0805 12:45:04.983179  434893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt ...
	I0805 12:45:04.983212  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt: {Name:mk23f04203fd617cbc4c347c7c65ec7b14bef93a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:05.039115  434893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key ...
	I0805 12:45:05.039171  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key: {Name:mk0fa1185ede3374007c2d42f52bad662da0b89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
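	The certs.go/crypto.go lines above generate the profile certificates for the node: a client cert, an apiserver serving cert carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.10, and an aggregator proxy-client cert, all signed by the cached minikubeCA. A compact Go sketch of producing a CA-signed serving certificate with IP SANs, using only the standard library, is below; key sizes, validity, and subject names are illustrative choices rather than minikube's exact parameters.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newSignedCert creates a throwaway CA and then a server certificate carrying
	// IP SANs, the same shape of artifact the log reports for the apiserver cert.
	func newSignedCert(sans []net.IP) ([]byte, error) {
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			return nil, err
		}
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  sans,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		return x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	}

	func main() {
		der, err := newSignedCert([]net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.10"),
		})
		if err != nil {
			panic(err)
		}
		fmt.Printf("signed apiserver-style cert: %d DER bytes\n", len(der))
	}
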
	I0805 12:45:05.039506  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:45:05.039567  434893 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:45:05.039578  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:45:05.039608  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:45:05.039644  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:45:05.039675  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:45:05.039737  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:05.040651  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:45:05.125386  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:45:05.152505  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:45:05.178622  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:45:05.202388  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 12:45:05.225937  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:45:05.250479  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:45:05.275606  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:45:05.300464  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:45:05.323649  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:45:05.347218  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:45:05.372745  434893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:45:05.390762  434893 ssh_runner.go:195] Run: openssl version
	I0805 12:45:05.397193  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:45:05.408510  434893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:45:05.413195  434893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:45:05.413246  434893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:45:05.419162  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:45:05.430328  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:45:05.441477  434893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:05.446209  434893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:05.446272  434893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:05.452849  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:45:05.465381  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:45:05.480068  434893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:45:05.485313  434893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:45:05.485391  434893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:45:05.492049  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
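The test/ln blocks above are the standard OpenSSL trust-store setup: each PEM copied under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and linked into /etc/ssl/certs under `<subject-hash>.0`. A minimal Go sketch of that idea (illustrative only, not minikube's certs.go; the sudo/bash wrapper mirrors the commands in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA links a PEM certificate into /etc/ssl/certs under its OpenSSL
// subject hash so the system trust store can find it (illustrative sketch).
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Same shape as the log: create the hash symlink only if it is missing.
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
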
	I0805 12:45:05.504309  434893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:45:05.508556  434893 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 12:45:05.508630  434893 kubeadm.go:392] StartCluster: {Name:kindnet-119870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:kindnet-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:45:05.508723  434893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:45:05.508769  434893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:45:05.545213  434893 cri.go:89] found id: ""
	I0805 12:45:05.545301  434893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:45:05.556126  434893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:45:05.567572  434893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:45:05.577974  434893 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:45:05.577998  434893 kubeadm.go:157] found existing configuration files:
	
	I0805 12:45:05.578046  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:45:05.588262  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:45:05.588328  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:45:05.598770  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:45:05.609877  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:45:05.609938  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:45:05.620266  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:45:05.630871  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:45:05.630931  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:45:05.641576  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:45:05.651588  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:45:05.651659  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
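The four grep-then-rm pairs above are the stale kubeconfig cleanup: any file under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is deleted so that `kubeadm init` can regenerate it. A condensed sketch of that loop (file list and endpoint copied from the log; the helper itself is illustrative, not the real kubeadm.go):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, so `kubeadm init` can rewrite it.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s stale or missing, removing\n", f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
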
	I0805 12:45:05.662043  434893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 12:45:05.729309  434893 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 12:45:05.729415  434893 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 12:45:05.881032  434893 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 12:45:05.881165  434893 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 12:45:05.881256  434893 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 12:45:06.081840  434893 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 12:45:06.213220  434893 out.go:204]   - Generating certificates and keys ...
	I0805 12:45:06.213339  434893 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 12:45:06.213415  434893 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 12:45:06.213497  434893 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 12:45:06.388396  434893 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 12:45:06.820111  434893 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 12:45:04.141810  434553 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:45:04.141833  434553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:45:04.141852  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHHostname
	I0805 12:45:04.145409  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.145933  434553 main.go:141] libmachine: (auto-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:b1", ip: ""} in network mk-auto-119870: {Iface:virbr2 ExpiryTime:2024-08-05 13:44:22 +0000 UTC Type:0 Mac:52:54:00:a8:ca:b1 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:auto-119870 Clientid:01:52:54:00:a8:ca:b1}
	I0805 12:45:04.145953  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined IP address 192.168.50.143 and MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.146102  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHPort
	I0805 12:45:04.146315  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHKeyPath
	I0805 12:45:04.146513  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHUsername
	I0805 12:45:04.146686  434553 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/auto-119870/id_rsa Username:docker}
	I0805 12:45:04.161760  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0805 12:45:04.162357  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.163041  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.163056  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.163377  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.163629  434553 main.go:141] libmachine: (auto-119870) Calling .GetState
	I0805 12:45:04.165226  434553 main.go:141] libmachine: (auto-119870) Calling .DriverName
	I0805 12:45:04.165534  434553 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:45:04.165553  434553 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:45:04.165572  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHHostname
	I0805 12:45:04.168221  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.168574  434553 main.go:141] libmachine: (auto-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:b1", ip: ""} in network mk-auto-119870: {Iface:virbr2 ExpiryTime:2024-08-05 13:44:22 +0000 UTC Type:0 Mac:52:54:00:a8:ca:b1 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:auto-119870 Clientid:01:52:54:00:a8:ca:b1}
	I0805 12:45:04.168603  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined IP address 192.168.50.143 and MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.168839  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHPort
	I0805 12:45:04.169012  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHKeyPath
	I0805 12:45:04.169128  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHUsername
	I0805 12:45:04.169243  434553 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/auto-119870/id_rsa Username:docker}
	I0805 12:45:04.365659  434553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:45:04.385049  434553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:45:04.447637  434553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:45:04.447679  434553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 12:45:05.628741  434553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.26304238s)
	I0805 12:45:05.628815  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:05.628830  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:05.629145  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:05.629168  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:05.629179  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:05.629187  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:05.629433  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:05.629458  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:06.329497  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:06.329526  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:06.329839  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:06.329859  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:06.329909  434553 main.go:141] libmachine: (auto-119870) DBG | Closing plugin on server side
	I0805 12:45:07.493407  434553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.108317832s)
	I0805 12:45:07.493468  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:07.493469  434553 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.045792509s)
	I0805 12:45:07.493493  434553 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.045790427s)
	I0805 12:45:07.493511  434553 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
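The pipeline that just completed is how the host.minikube.internal record gets into CoreDNS: the coredns ConfigMap is dumped as YAML, sed splices a `hosts { ... fallthrough }` block ahead of the forward plugin in the Corefile, and the result is pushed back with `kubectl replace -f -`. A small sketch that only rebuilds that pipeline string (kubectl path, kubeconfig, and gateway IP taken from the log; everything else is illustrative):

package main

import "fmt"

// corednsInjectCmd rebuilds the shell pipeline shown in the log: dump the
// coredns ConfigMap, splice a hosts block ahead of the forward plugin, and
// replace the ConfigMap. (The real command also inserts a `log` directive.)
func corednsInjectCmd(kubectl, kubeconfig, hostIP string) string {
	get := fmt.Sprintf("sudo %s --kubeconfig=%s -n kube-system get configmap coredns -o yaml", kubectl, kubeconfig)
	sed := fmt.Sprintf(`sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }'`, hostIP)
	replace := fmt.Sprintf("sudo %s --kubeconfig=%s replace -f -", kubectl, kubeconfig)
	return get + " | " + sed + " | " + replace
}

func main() {
	fmt.Println(corednsInjectCmd(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"192.168.50.1",
	))
}
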
	I0805 12:45:07.493480  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:07.493872  434553 main.go:141] libmachine: (auto-119870) DBG | Closing plugin on server side
	I0805 12:45:07.493924  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:07.493948  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:07.493962  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:07.493970  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:07.494237  434553 main.go:141] libmachine: (auto-119870) DBG | Closing plugin on server side
	I0805 12:45:07.494282  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:07.494290  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:07.494664  434553 node_ready.go:35] waiting up to 15m0s for node "auto-119870" to be "Ready" ...
	I0805 12:45:07.496631  434553 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 12:45:07.497825  434553 addons.go:510] duration metric: took 3.412714062s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 12:45:07.501679  434553 node_ready.go:49] node "auto-119870" has status "Ready":"True"
	I0805 12:45:07.501707  434553 node_ready.go:38] duration metric: took 7.01384ms for node "auto-119870" to be "Ready" ...
	I0805 12:45:07.501720  434553 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:45:07.514457  434553 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-2smkn" in "kube-system" namespace to be "Ready" ...
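From here the run waits first for the node and then for each system-critical pod (coredns first) to report Ready, polling for up to 15 minutes. A generic polling sketch of that pattern (the readiness callback below is a hypothetical stand-in; the real code in pod_ready.go queries the API server):

package main

import (
	"fmt"
	"time"
)

// waitForReady polls check() until it returns true or the timeout expires,
// mirroring the "waiting up to 15m0s for pod ... to be Ready" pattern above.
func waitForReady(name string, timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s to be Ready", name)
}

func main() {
	// Hypothetical readiness check; a real one would query the API server.
	attempts := 0
	err := waitForReady("coredns-7db6d8ff4d-2smkn", 15*time.Minute, 2*time.Second, func() (bool, error) {
		attempts++
		return attempts > 3, nil
	})
	if err != nil {
		fmt.Println(err)
	}
}
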
	I0805 12:45:07.435365  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:45:07.435400  435321 machine.go:97] duration metric: took 9.154706851s to provisionDockerMachine
	I0805 12:45:07.435416  435321 start.go:293] postStartSetup for "pause-335738" (driver="kvm2")
	I0805 12:45:07.435430  435321 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:45:07.435463  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.435973  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:45:07.436011  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.439119  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.439536  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.439568  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.439811  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.440026  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.440198  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.440359  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.531327  435321 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:45:07.536064  435321 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:45:07.536093  435321 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:45:07.536168  435321 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:45:07.536277  435321 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:45:07.536401  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:45:07.548192  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:07.583195  435321 start.go:296] duration metric: took 147.760389ms for postStartSetup
	I0805 12:45:07.583246  435321 fix.go:56] duration metric: took 9.330325706s for fixHost
	I0805 12:45:07.583273  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.586518  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.586949  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.586981  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.587188  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.587426  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.587614  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.587795  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.587976  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:45:07.588199  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:45:07.588214  435321 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:45:07.709190  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722861907.704347151
	
	I0805 12:45:07.709217  435321 fix.go:216] guest clock: 1722861907.704347151
	I0805 12:45:07.709227  435321 fix.go:229] Guest: 2024-08-05 12:45:07.704347151 +0000 UTC Remote: 2024-08-05 12:45:07.583251272 +0000 UTC m=+24.284926934 (delta=121.095879ms)
	I0805 12:45:07.709254  435321 fix.go:200] guest clock delta is within tolerance: 121.095879ms
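fix.go reads the guest clock over SSH (`date +%s.%N`), compares it with the host time recorded when the command returned, and only resyncs if the difference exceeds a tolerance; here the delta is 121.095879ms and passes. A small sketch of that comparison (the 2s tolerance is an assumed value; the log only shows that ~121ms is within it):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is close enough to the host
// clock that no resync is needed.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1722861907, 704347151) // parsed from `date +%s.%N`
	host := time.Date(2024, 8, 5, 12, 45, 7, 583251272, time.UTC)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance: assumed value
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
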
	I0805 12:45:07.709261  435321 start.go:83] releasing machines lock for "pause-335738", held for 9.456379931s
	I0805 12:45:07.709285  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.709564  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:45:07.713014  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.713434  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.713461  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.713671  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714276  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714515  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714639  435321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:45:07.714697  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.714762  435321 ssh_runner.go:195] Run: cat /version.json
	I0805 12:45:07.714791  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.717917  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.717952  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718167  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.718188  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718332  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.718447  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.718470  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718508  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.718569  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.718707  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.718708  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.718884  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.718987  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.719193  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.805733  435321 ssh_runner.go:195] Run: systemctl --version
	I0805 12:45:07.828521  435321 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:45:07.992325  435321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:45:08.000564  435321 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:45:08.000647  435321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:45:08.013511  435321 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 12:45:08.013535  435321 start.go:495] detecting cgroup driver to use...
	I0805 12:45:08.013612  435321 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:45:08.037227  435321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:45:08.056733  435321 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:45:08.056797  435321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:45:08.077383  435321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:45:08.135695  435321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:45:06.923649  434893 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 12:45:07.053094  434893 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 12:45:07.053484  434893 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-119870 localhost] and IPs [192.168.72.10 127.0.0.1 ::1]
	I0805 12:45:07.201979  434893 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 12:45:07.202236  434893 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-119870 localhost] and IPs [192.168.72.10 127.0.0.1 ::1]
	I0805 12:45:07.428235  434893 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 12:45:07.639538  434893 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 12:45:08.053798  434893 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 12:45:08.053972  434893 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 12:45:08.267929  434893 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 12:45:08.365899  434893 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 12:45:08.512425  434893 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 12:45:08.643905  434893 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 12:45:08.708761  434893 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 12:45:08.709621  434893 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 12:45:08.711903  434893 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 12:45:08.373203  435321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:45:08.650299  435321 docker.go:233] disabling docker service ...
	I0805 12:45:08.650369  435321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:45:08.733071  435321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:45:08.780524  435321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:45:09.135883  435321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:45:09.436578  435321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:45:09.464255  435321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:45:09.496891  435321 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:45:09.496966  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.517774  435321 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:45:09.517853  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.554633  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.572207  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.585896  435321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:45:09.610404  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.627089  435321 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.648767  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.668714  435321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:45:09.690244  435321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:45:09.705043  435321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:09.934592  435321 ssh_runner.go:195] Run: sudo systemctl restart crio
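The sed invocations above reconfigure CRI-O in place: /etc/crio/crio.conf.d/02-crio.conf gets the pause image and the cgroupfs cgroup manager, a conmon_cgroup and a net.ipv4.ip_unprivileged_port_start sysctl are added, IPv4 forwarding is switched on, and the daemon is restarted. A condensed Go sketch of the core edits (same file and keys as in the log; the command list is illustrative and much shorter than minikube's actual sequence):

package main

import (
	"fmt"
	"os/exec"
)

// configureCrio applies the same kind of in-place edits the log shows:
// set the pause image and cgroup manager, then restart the runtime.
func configureCrio(pauseImage, cgroupMgr string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := [][]string{
		{"sh", "-c", fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf)},
		{"sh", "-c", fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf)},
		{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, c := range cmds {
		if err := exec.Command(c[0], c[1:]...).Run(); err != nil {
			return fmt.Errorf("%v: %w", c, err)
		}
	}
	return nil
}

func main() {
	if err := configureCrio("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}
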
	I0805 12:45:10.469200  435321 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:45:10.469293  435321 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:45:10.475998  435321 start.go:563] Will wait 60s for crictl version
	I0805 12:45:10.476070  435321 ssh_runner.go:195] Run: which crictl
	I0805 12:45:10.515388  435321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:45:10.668529  435321 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:45:10.668682  435321 ssh_runner.go:195] Run: crio --version
	I0805 12:45:10.900094  435321 ssh_runner.go:195] Run: crio --version
	I0805 12:45:10.975777  435321 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:45:08.713600  434893 out.go:204]   - Booting up control plane ...
	I0805 12:45:08.713715  434893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 12:45:08.717685  434893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 12:45:08.718925  434893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 12:45:08.740263  434893 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 12:45:08.740417  434893 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 12:45:08.740496  434893 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 12:45:08.898938  434893 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 12:45:08.899078  434893 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 12:45:09.900870  434893 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00232472s
	I0805 12:45:09.901028  434893 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 12:45:07.998190  434553 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-119870" context rescaled to 1 replicas
	I0805 12:45:09.524999  434553 pod_ready.go:102] pod "coredns-7db6d8ff4d-2smkn" in "kube-system" namespace has status "Ready":"False"
	I0805 12:45:12.024133  434553 pod_ready.go:102] pod "coredns-7db6d8ff4d-2smkn" in "kube-system" namespace has status "Ready":"False"
	I0805 12:45:10.977172  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:45:10.980756  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:10.981259  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:10.981303  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:10.981603  435321 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:45:10.989940  435321 kubeadm.go:883] updating cluster {Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:45:10.990128  435321 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:45:10.990202  435321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:11.043291  435321 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:11.043319  435321 crio.go:433] Images already preloaded, skipping extraction
	I0805 12:45:11.043368  435321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:11.090395  435321 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:11.090428  435321 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:45:11.090440  435321 kubeadm.go:934] updating node { 192.168.39.97 8443 v1.30.3 crio true true} ...
	I0805 12:45:11.090582  435321 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-335738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:45:11.090699  435321 ssh_runner.go:195] Run: crio config
	I0805 12:45:11.189451  435321 cni.go:84] Creating CNI manager for ""
	I0805 12:45:11.189482  435321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:45:11.189499  435321 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:45:11.189529  435321 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-335738 NodeName:pause-335738 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:45:11.189716  435321 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-335738"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:45:11.189792  435321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:45:11.200510  435321 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:45:11.200601  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:45:11.212958  435321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 12:45:11.237802  435321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:45:11.254534  435321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0805 12:45:11.271371  435321 ssh_runner.go:195] Run: grep 192.168.39.97	control-plane.minikube.internal$ /etc/hosts
	I0805 12:45:11.275240  435321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:11.410170  435321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:45:11.425406  435321 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738 for IP: 192.168.39.97
	I0805 12:45:11.425438  435321 certs.go:194] generating shared ca certs ...
	I0805 12:45:11.425460  435321 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:11.425613  435321 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:45:11.425657  435321 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:45:11.425666  435321 certs.go:256] generating profile certs ...
	I0805 12:45:11.425737  435321 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/client.key
	I0805 12:45:11.425821  435321 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.key.4c2e0008
	I0805 12:45:11.425881  435321 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.key
	I0805 12:45:11.425992  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:45:11.426021  435321 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:45:11.426030  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:45:11.426052  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:45:11.426076  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:45:11.426098  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:45:11.426133  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:11.426731  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:45:11.451227  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:45:11.477587  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:45:11.504930  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:45:11.529685  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 12:45:11.558933  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:45:11.585167  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:45:11.614871  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:45:11.644643  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:45:11.672724  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:45:11.732393  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:45:11.756262  435321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:45:11.773242  435321 ssh_runner.go:195] Run: openssl version
	I0805 12:45:11.778989  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:45:11.790709  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.795841  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.795942  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.802389  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:45:11.812281  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:45:11.823184  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.827757  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.827815  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.833336  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:45:11.843830  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:45:11.855223  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.860007  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.860059  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.865896  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:45:11.876271  435321 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:45:11.881124  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:45:11.887289  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:45:11.896111  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:45:11.901722  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:45:11.907361  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:45:11.913038  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
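Because apiserver-kubelet-client.crt already exists for pause-335738, the certs are not regenerated; instead each control-plane certificate is checked to still be valid for at least another day with `openssl x509 -checkend 86400`. A hedged sketch of that loop (cert paths copied from the log; the function name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// certsValidFor24h runs `openssl x509 -checkend 86400` on each certificate;
// a non-zero exit means the cert expires within the next day (or is unreadable).
func certsValidFor24h(certs []string) error {
	for _, c := range certs {
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			return fmt.Errorf("%s expires within 24h or could not be read: %w", c, err)
		}
	}
	return nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	if err := certsValidFor24h(certs); err != nil {
		fmt.Println(err)
	}
}
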
	I0805 12:45:11.918906  435321 kubeadm.go:392] StartCluster: {Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:45:11.919030  435321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:45:11.919069  435321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:45:11.969286  435321 cri.go:89] found id: "2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02"
	I0805 12:45:11.969315  435321 cri.go:89] found id: "add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d"
	I0805 12:45:11.969321  435321 cri.go:89] found id: "862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892"
	I0805 12:45:11.969326  435321 cri.go:89] found id: "fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51"
	I0805 12:45:11.969330  435321 cri.go:89] found id: "62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de"
	I0805 12:45:11.969334  435321 cri.go:89] found id: "57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832"
	I0805 12:45:11.969338  435321 cri.go:89] found id: ""
	I0805 12:45:11.969406  435321 ssh_runner.go:195] Run: sudo runc list -f json
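	The certificate steps logged above amount to two checks: "openssl x509 -checkend 86400" exits non-zero if a certificate expires within the next 24 hours, and the cluster CA is linked under its OpenSSL subject hash (here b5213941.0) so the system trust store can resolve it. A minimal shell sketch of the same idea, reusing the paths from this run (an illustrative sketch, not minikube's actual implementation):
	
	    # fail if the client certificate expires within the next 24 hours
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      || echo "certificate expires within 24h" >&2
	
	    # publish the cluster CA under its OpenSSL subject hash so TLS lookups can find it
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"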
	
	
	==> CRI-O <==
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.316332407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861933316305388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c21574eb-0d3f-4237-b3e8-8ac3fb984747 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.317037635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d13a8f61-9314-49a7-8775-e53dcbe63b04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.317098101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d13a8f61-9314-49a7-8775-e53dcbe63b04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.317591842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d13a8f61-9314-49a7-8775-e53dcbe63b04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.367031748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6d79bd2-c3ad-45d0-9256-790b27522bc5 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.367130586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6d79bd2-c3ad-45d0-9256-790b27522bc5 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.368525137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5e32009-154b-4824-99b1-360e4fab8d63 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.368896748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861933368875843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5e32009-154b-4824-99b1-360e4fab8d63 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.369591038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0015d7af-5742-4ae6-9126-0830cfdb36b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.369651205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0015d7af-5742-4ae6-9126-0830cfdb36b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.369905592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0015d7af-5742-4ae6-9126-0830cfdb36b9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.414649238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3be04f5e-54c4-4a04-9e7b-450433d5d994 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.414732459Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3be04f5e-54c4-4a04-9e7b-450433d5d994 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.417352995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea35ee17-0472-4bca-aa5d-4d9a43301a21 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.417880203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861933417853447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea35ee17-0472-4bca-aa5d-4d9a43301a21 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.418746269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e2fae2b-bd96-4b70-800f-d2763e8df1cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.418808472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e2fae2b-bd96-4b70-800f-d2763e8df1cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.419263685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e2fae2b-bd96-4b70-800f-d2763e8df1cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.461495688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2716c97f-6278-49fb-811b-e7bfb1ac16e6 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.461575445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2716c97f-6278-49fb-811b-e7bfb1ac16e6 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.462920595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c9c4cc0-c73c-41cb-9dd3-72faf968fa44 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.463291237Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861933463269872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c9c4cc0-c73c-41cb-9dd3-72faf968fa44 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.463825570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c643ab5-0f82-432a-9675-56c595f65255 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.463910129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c643ab5-0f82-432a-9675-56c595f65255 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:33 pause-335738 crio[2948]: time="2024-08-05 12:45:33.464184407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c643ab5-0f82-432a-9675-56c595f65255 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5b483fbdb9094       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 seconds ago      Running             coredns                   2                   bea3686074e08       coredns-7db6d8ff4d-lzsxg
	b2cfe1f13563b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   15 seconds ago      Running             kube-proxy                2                   1029fa92bc15f       kube-proxy-l65cv
	cdb056427f892       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   19 seconds ago      Running             etcd                      2                   135cb004edd66       etcd-pause-335738
	fb2cf67e138ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   19 seconds ago      Running             kube-scheduler            2                   82e471517e980       kube-scheduler-pause-335738
	29a6e6cec6eff       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   19 seconds ago      Running             kube-controller-manager   2                   f060ab33d43b2       kube-controller-manager-pause-335738
	20853392b9232       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   19 seconds ago      Running             kube-apiserver            2                   c0b9a57752700       kube-apiserver-pause-335738
	2d03d5b65be38       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago      Exited              coredns                   1                   8ddf2717f05e8       coredns-7db6d8ff4d-lzsxg
	add61196ce4f0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   24 seconds ago      Exited              kube-proxy                1                   4800c6d276d8e       kube-proxy-l65cv
	862df9ac2aa29       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago      Exited              kube-controller-manager   1                   dbc576013f667       kube-controller-manager-pause-335738
	fe361230dd126       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   24 seconds ago      Exited              kube-scheduler            1                   cd9efca96a8c0       kube-scheduler-pause-335738
	62e629ccbea51       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago      Exited              kube-apiserver            1                   c706526c47804       kube-apiserver-pause-335738
	57dd9d3e8f34f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Exited              etcd                      1                   9710e986deeef       etcd-pause-335738
	
	
	==> coredns [2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02] <==
	
	
	==> coredns [5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51185 - 13972 "HINFO IN 1866762790375974259.8856323661939502731. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015107772s
	
	
	==> describe nodes <==
	Name:               pause-335738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-335738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=pause-335738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T12_44_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:44:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-335738
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:45:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    pause-335738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a89b63ec2ffd4501bdd3a0b908d25c2d
	  System UUID:                a89b63ec-2ffd-4501-bdd3-a0b908d25c2d
	  Boot ID:                    7b889ebf-9f5c-45a8-837f-a2ba8811b564
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lzsxg                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 etcd-pause-335738                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         70s
	  kube-system                 kube-apiserver-pause-335738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-335738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-l65cv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-335738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  70s (x2 over 70s)  kubelet          Node pause-335738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x2 over 70s)  kubelet          Node pause-335738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x2 over 70s)  kubelet          Node pause-335738 status is now: NodeHasSufficientPID
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeReady                69s                kubelet          Node pause-335738 status is now: NodeReady
	  Normal  RegisteredNode           57s                node-controller  Node pause-335738 event: Registered Node pause-335738 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-335738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-335738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-335738 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node pause-335738 event: Registered Node pause-335738 in Controller
	
	
	==> dmesg <==
	[Aug 5 12:44] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.057926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059338] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.165740] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.125137] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.291545] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.300851] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.068607] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.598251] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.443014] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.115749] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.075824] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.092595] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.290533] systemd-fstab-generator[1567]: Ignoring "noauto" option for root device
	[  +7.992745] kauditd_printk_skb: 92 callbacks suppressed
	[Aug 5 12:45] systemd-fstab-generator[2392]: Ignoring "noauto" option for root device
	[  +0.253330] systemd-fstab-generator[2487]: Ignoring "noauto" option for root device
	[  +0.445775] systemd-fstab-generator[2683]: Ignoring "noauto" option for root device
	[  +0.368655] systemd-fstab-generator[2821]: Ignoring "noauto" option for root device
	[  +0.483050] systemd-fstab-generator[2923]: Ignoring "noauto" option for root device
	[  +1.531752] systemd-fstab-generator[3511]: Ignoring "noauto" option for root device
	[  +2.179567] systemd-fstab-generator[3635]: Ignoring "noauto" option for root device
	[  +0.081806] kauditd_printk_skb: 244 callbacks suppressed
	[ +15.683735] systemd-fstab-generator[4081]: Ignoring "noauto" option for root device
	[  +0.118055] kauditd_printk_skb: 50 callbacks suppressed
	
	
	==> etcd [57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832] <==
	{"level":"info","ts":"2024-08-05T12:45:09.17733Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"28.609508ms"}
	{"level":"info","ts":"2024-08-05T12:45:09.225498Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-05T12:45:09.25861Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","commit-index":415}
	{"level":"info","ts":"2024-08-05T12:45:09.258719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-05T12:45:09.258748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became follower at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:09.258764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f61fae125a956d36 [peers: [], term: 2, commit: 415, applied: 0, lastindex: 415, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-05T12:45:09.272196Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-05T12:45:09.353244Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":398}
	{"level":"info","ts":"2024-08-05T12:45:09.368166Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-05T12:45:09.379913Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f61fae125a956d36","timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:45:09.380228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f61fae125a956d36"}
	{"level":"info","ts":"2024-08-05T12:45:09.380263Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"f61fae125a956d36","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-05T12:45:09.380687Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-05T12:45:09.380842Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:09.380887Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:09.380894Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:09.381129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(17735085251460689206)"}
	{"level":"info","ts":"2024-08-05T12:45:09.381171Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-08-05T12:45:09.38127Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:09.381302Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:09.406535Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T12:45:09.406749Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T12:45:09.406787Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T12:45:09.406898Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-08-05T12:45:09.406905Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	
	
	==> etcd [cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8] <==
	{"level":"info","ts":"2024-08-05T12:45:14.581758Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-08-05T12:45:14.581858Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:14.5819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:14.583785Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:14.583869Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:14.583884Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:14.590763Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T12:45:14.591018Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T12:45:14.591061Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T12:45:14.591126Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-08-05T12:45:14.59115Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-08-05T12:45:16.05181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:16.05192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:16.051983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:16.052019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.052044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.052071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.052097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.057574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:45:16.05752Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:pause-335738 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:45:16.058541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:45:16.058754Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:45:16.058785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:45:16.059965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:45:16.060412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.97:2379"}
	
	
	==> kernel <==
	 12:45:33 up 1 min,  0 users,  load average: 1.82, 0.59, 0.21
	Linux pause-335738 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead] <==
	I0805 12:45:17.529242       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 12:45:17.540321       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 12:45:17.540402       1 policy_source.go:224] refreshing policies
	I0805 12:45:17.548013       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 12:45:17.548083       1 aggregator.go:165] initial CRD sync complete...
	I0805 12:45:17.548104       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 12:45:17.548109       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 12:45:17.548113       1 cache.go:39] Caches are synced for autoregister controller
	I0805 12:45:17.599402       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 12:45:17.599438       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 12:45:17.600016       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 12:45:17.600202       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 12:45:17.600801       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 12:45:17.601979       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 12:45:17.607532       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 12:45:17.608831       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0805 12:45:17.618821       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 12:45:18.409590       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 12:45:19.023848       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 12:45:19.038837       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 12:45:19.074417       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 12:45:19.102873       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 12:45:19.109526       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 12:45:29.914960       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 12:45:29.967664       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de] <==
	I0805 12:45:09.677795       1 options.go:221] external host was not specified, using 192.168.39.97
	I0805 12:45:09.680987       1 server.go:148] Version: v1.30.3
	I0805 12:45:09.681032       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644] <==
	I0805 12:45:29.800551       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0805 12:45:29.805602       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0805 12:45:29.805661       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0805 12:45:29.805741       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0805 12:45:29.805787       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0805 12:45:29.805951       1 shared_informer.go:320] Caches are synced for crt configmap
	I0805 12:45:29.810244       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 12:45:29.822844       1 shared_informer.go:320] Caches are synced for GC
	I0805 12:45:29.835281       1 shared_informer.go:320] Caches are synced for disruption
	I0805 12:45:29.836450       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0805 12:45:29.853774       1 shared_informer.go:320] Caches are synced for deployment
	I0805 12:45:29.853942       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0805 12:45:29.858066       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0805 12:45:29.858249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="134.001µs"
	I0805 12:45:29.902747       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0805 12:45:29.903193       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 12:45:29.932836       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:45:29.956240       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 12:45:29.971975       1 shared_informer.go:320] Caches are synced for namespace
	I0805 12:45:29.974290       1 shared_informer.go:320] Caches are synced for service account
	I0805 12:45:29.982987       1 shared_informer.go:320] Caches are synced for HPA
	I0805 12:45:30.019207       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:45:30.454186       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:45:30.473868       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:45:30.473916       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892] <==
	
	
	==> kube-proxy [add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d] <==
	
	
	==> kube-proxy [b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a] <==
	I0805 12:45:18.221982       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:45:18.230636       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0805 12:45:18.262682       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:45:18.262780       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:45:18.262813       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:45:18.265510       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:45:18.265706       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:45:18.265888       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:45:18.266994       1 config.go:192] "Starting service config controller"
	I0805 12:45:18.267285       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:45:18.267429       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:45:18.267476       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:45:18.267924       1 config.go:319] "Starting node config controller"
	I0805 12:45:18.267961       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:45:18.368231       1 shared_informer.go:320] Caches are synced for node config
	I0805 12:45:18.368317       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:45:18.368439       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719] <==
	I0805 12:45:15.112690       1 serving.go:380] Generated self-signed cert in-memory
	W0805 12:45:17.493110       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:45:17.493151       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:45:17.493160       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:45:17.493166       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:45:17.522790       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 12:45:17.522830       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:45:17.526886       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:45:17.526979       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:45:17.526991       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:45:17.527003       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 12:45:17.627889       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51] <==
	
	
	==> kubelet <==
	Aug 05 12:45:13 pause-335738 kubelet[3642]: E0805 12:45:13.929323    3642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-335738?timeout=10s\": dial tcp 192.168.39.97:8443: connect: connection refused" interval="400ms"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.030834    3642 kubelet_node_status.go:73] "Attempting to register node" node="pause-335738"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: E0805 12:45:14.031757    3642 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-335738"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.164155    3642 scope.go:117] "RemoveContainer" containerID="57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.167052    3642 scope.go:117] "RemoveContainer" containerID="62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.167600    3642 scope.go:117] "RemoveContainer" containerID="fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.168292    3642 scope.go:117] "RemoveContainer" containerID="862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: E0805 12:45:14.330819    3642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-335738?timeout=10s\": dial tcp 192.168.39.97:8443: connect: connection refused" interval="800ms"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.434134    3642 kubelet_node_status.go:73] "Attempting to register node" node="pause-335738"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: E0805 12:45:14.435612    3642 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-335738"
	Aug 05 12:45:15 pause-335738 kubelet[3642]: I0805 12:45:15.237250    3642 kubelet_node_status.go:73] "Attempting to register node" node="pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.565284    3642 kubelet_node_status.go:112] "Node was previously registered" node="pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.565729    3642 kubelet_node_status.go:76] "Successfully registered node" node="pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.566908    3642 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.567888    3642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: E0805 12:45:17.619980    3642 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-335738\" already exists" pod="kube-system/kube-controller-manager-pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.717470    3642 apiserver.go:52] "Watching apiserver"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.721229    3642 topology_manager.go:215] "Topology Admit Handler" podUID="d7245122-31af-45f8-99bf-1c863cbe5fe0" podNamespace="kube-system" podName="kube-proxy-l65cv"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.722360    3642 topology_manager.go:215] "Topology Admit Handler" podUID="f9763738-61b9-43a5-9368-e06b52f43cd1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lzsxg"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.724038    3642 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.808859    3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7245122-31af-45f8-99bf-1c863cbe5fe0-xtables-lock\") pod \"kube-proxy-l65cv\" (UID: \"d7245122-31af-45f8-99bf-1c863cbe5fe0\") " pod="kube-system/kube-proxy-l65cv"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.809027    3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7245122-31af-45f8-99bf-1c863cbe5fe0-lib-modules\") pod \"kube-proxy-l65cv\" (UID: \"d7245122-31af-45f8-99bf-1c863cbe5fe0\") " pod="kube-system/kube-proxy-l65cv"
	Aug 05 12:45:18 pause-335738 kubelet[3642]: I0805 12:45:18.023559    3642 scope.go:117] "RemoveContainer" containerID="add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d"
	Aug 05 12:45:18 pause-335738 kubelet[3642]: I0805 12:45:18.024734    3642 scope.go:117] "RemoveContainer" containerID="2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02"
	Aug 05 12:45:26 pause-335738 kubelet[3642]: I0805 12:45:26.101324    3642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:45:32.967447  435706 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
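Note on the stderr block above: "bufio.Scanner: token too long" is Go's bufio.Scanner hitting its default 64 KiB per-token limit while reading lastStart.txt, whose single-line config dumps (like the ones later in this log) exceed that size. The sketch below illustrates the failure mode and the usual workaround, Scanner.Buffer; the file path is hypothetical and this is not the test harness's actual code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path, for illustration only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// Scan() stop and Err() return bufio.ErrTooLong ("token too long").
		// Raising the limit lets very long log lines through:
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow tokens up to 10 MiB

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}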
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-335738 -n pause-335738
helpers_test.go:261: (dbg) Run:  kubectl --context pause-335738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-335738 -n pause-335738
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-335738 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-335738 logs -n 25: (1.418952366s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-960699          | force-systemd-flag-960699 | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	| start   | -p running-upgrade-313656             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:42 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	| start   | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:41 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-823434 ssh               | cert-options-823434       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-823434 -- sudo        | cert-options-823434       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-823434                | cert-options-823434       | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC | 05 Aug 24 12:40 UTC |
	| start   | -p kubernetes-upgrade-515808          | kubernetes-upgrade-515808 | jenkins | v1.33.1 | 05 Aug 24 12:40 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-833202 sudo           | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:41 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:41 UTC | 05 Aug 24 12:41 UTC |
	| start   | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:41 UTC | 05 Aug 24 12:42 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-833202 sudo           | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:42 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-833202                | NoKubernetes-833202       | jenkins | v1.33.1 | 05 Aug 24 12:42 UTC | 05 Aug 24 12:42 UTC |
	| start   | -p stopped-upgrade-938024             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 12:42 UTC | 05 Aug 24 12:43 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-313656             | running-upgrade-313656    | jenkins | v1.33.1 | 05 Aug 24 12:42 UTC | 05 Aug 24 12:43 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-623276             | cert-expiration-623276    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-938024 stop           | minikube                  | jenkins | v1.26.0 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	| start   | -p stopped-upgrade-938024             | stopped-upgrade-938024    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:44 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-623276             | cert-expiration-623276    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	| start   | -p pause-335738 --memory=2048         | pause-335738              | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:44 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-313656             | running-upgrade-313656    | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC | 05 Aug 24 12:43 UTC |
	| start   | -p auto-119870 --memory=3072          | auto-119870               | jenkins | v1.33.1 | 05 Aug 24 12:43 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-938024             | stopped-upgrade-938024    | jenkins | v1.33.1 | 05 Aug 24 12:44 UTC | 05 Aug 24 12:44 UTC |
	| start   | -p kindnet-119870                     | kindnet-119870            | jenkins | v1.33.1 | 05 Aug 24 12:44 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-335738                       | pause-335738              | jenkins | v1.33.1 | 05 Aug 24 12:44 UTC | 05 Aug 24 12:45 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:44:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:44:43.344035  435321 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:44:43.344171  435321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:44:43.344182  435321 out.go:304] Setting ErrFile to fd 2...
	I0805 12:44:43.344189  435321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:44:43.344374  435321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:44:43.344967  435321 out.go:298] Setting JSON to false
	I0805 12:44:43.346018  435321 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8830,"bootTime":1722853053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:44:43.346094  435321 start.go:139] virtualization: kvm guest
	I0805 12:44:43.368791  435321 out.go:177] * [pause-335738] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:44:43.453415  435321 notify.go:220] Checking for updates...
	I0805 12:44:43.453501  435321 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:44:43.537804  435321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:44:43.636253  435321 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:44:43.774898  435321 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:44:43.808131  435321 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:44:43.851242  435321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:44:43.901680  435321 config.go:182] Loaded profile config "pause-335738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:43.902170  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:43.902229  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:43.920143  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0805 12:44:43.920851  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:43.921677  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:43.921710  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:43.922149  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:43.922394  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:43.922752  435321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:44:43.923238  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:43.923325  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:43.940533  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0805 12:44:43.941054  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:43.941600  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:43.941632  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:43.941981  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:43.942163  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:43.979976  435321 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:44:43.981254  435321 start.go:297] selected driver: kvm2
	I0805 12:44:43.981275  435321 start.go:901] validating driver "kvm2" against &{Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:44:43.981495  435321 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:44:43.981960  435321 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:44:43.982072  435321 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:44:43.998073  435321 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:44:43.999104  435321 cni.go:84] Creating CNI manager for ""
	I0805 12:44:43.999134  435321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:44:43.999241  435321 start.go:340] cluster config:
	{Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:44:43.999515  435321 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:44:44.001520  435321 out.go:177] * Starting "pause-335738" primary control-plane node in "pause-335738" cluster
	I0805 12:44:42.970473  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:42.971049  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:42.971076  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:42.970996  435153 retry.go:31] will retry after 2.255351199s: waiting for machine to come up
	I0805 12:44:45.229689  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:45.230326  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:45.230353  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:45.230252  435153 retry.go:31] will retry after 2.54222134s: waiting for machine to come up
	I0805 12:44:42.924035  434553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 12:44:42.924155  434553 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 12:44:43.925144  434553 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00193841s
	I0805 12:44:43.925248  434553 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 12:44:44.002746  435321 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:44:44.002789  435321 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 12:44:44.002805  435321 cache.go:56] Caching tarball of preloaded images
	I0805 12:44:44.002914  435321 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:44:44.002936  435321 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 12:44:44.003063  435321 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/config.json ...
	I0805 12:44:44.003287  435321 start.go:360] acquireMachinesLock for pause-335738: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:44:48.925717  434553 kubeadm.go:310] [api-check] The API server is healthy after 5.002245616s
	I0805 12:44:48.935922  434553 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 12:44:48.947374  434553 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 12:44:48.974358  434553 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 12:44:48.974605  434553 kubeadm.go:310] [mark-control-plane] Marking the node auto-119870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 12:44:48.985798  434553 kubeadm.go:310] [bootstrap-token] Using token: bjp3f3.p69em8uudx6hyl0p
	I0805 12:44:48.987171  434553 out.go:204]   - Configuring RBAC rules ...
	I0805 12:44:48.987290  434553 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 12:44:48.994374  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 12:44:49.017673  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 12:44:49.022435  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 12:44:49.028617  434553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 12:44:49.032676  434553 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 12:44:49.332279  434553 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 12:44:49.768880  434553 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 12:44:50.331385  434553 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 12:44:50.331413  434553 kubeadm.go:310] 
	I0805 12:44:50.331471  434553 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 12:44:50.331509  434553 kubeadm.go:310] 
	I0805 12:44:50.331637  434553 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 12:44:50.331654  434553 kubeadm.go:310] 
	I0805 12:44:50.331699  434553 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 12:44:50.331801  434553 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 12:44:50.331877  434553 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 12:44:50.331886  434553 kubeadm.go:310] 
	I0805 12:44:50.331984  434553 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 12:44:50.331993  434553 kubeadm.go:310] 
	I0805 12:44:50.332062  434553 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 12:44:50.332084  434553 kubeadm.go:310] 
	I0805 12:44:50.332159  434553 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 12:44:50.332237  434553 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 12:44:50.332296  434553 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 12:44:50.332305  434553 kubeadm.go:310] 
	I0805 12:44:50.332383  434553 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 12:44:50.332494  434553 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 12:44:50.332505  434553 kubeadm.go:310] 
	I0805 12:44:50.332581  434553 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bjp3f3.p69em8uudx6hyl0p \
	I0805 12:44:50.332705  434553 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 12:44:50.332740  434553 kubeadm.go:310] 	--control-plane 
	I0805 12:44:50.332749  434553 kubeadm.go:310] 
	I0805 12:44:50.332857  434553 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 12:44:50.332867  434553 kubeadm.go:310] 
	I0805 12:44:50.332964  434553 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bjp3f3.p69em8uudx6hyl0p \
	I0805 12:44:50.333114  434553 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 12:44:50.333257  434553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 12:44:50.333349  434553 cni.go:84] Creating CNI manager for ""
	I0805 12:44:50.333367  434553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:44:50.334958  434553 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:44:47.773613  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:47.774165  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:47.774190  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:47.774099  435153 retry.go:31] will retry after 3.606807249s: waiting for machine to come up
	I0805 12:44:51.384791  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:51.385349  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find current IP address of domain kindnet-119870 in network mk-kindnet-119870
	I0805 12:44:51.385373  434893 main.go:141] libmachine: (kindnet-119870) DBG | I0805 12:44:51.385294  435153 retry.go:31] will retry after 5.167725361s: waiting for machine to come up
	I0805 12:44:50.336085  434553 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:44:50.347010  434553 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:44:50.365133  434553 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:44:50.365244  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:50.365295  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-119870 minikube.k8s.io/updated_at=2024_08_05T12_44_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=auto-119870 minikube.k8s.io/primary=true
	I0805 12:44:50.405054  434553 ops.go:34] apiserver oom_adj: -16
	I0805 12:44:50.487680  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:50.988580  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:51.487877  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:51.988334  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:52.487841  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:56.557768  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.558350  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has current primary IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.558384  434893 main.go:141] libmachine: (kindnet-119870) Found IP for machine: 192.168.72.10
	I0805 12:44:56.558399  434893 main.go:141] libmachine: (kindnet-119870) Reserving static IP address...
	I0805 12:44:56.558757  434893 main.go:141] libmachine: (kindnet-119870) DBG | unable to find host DHCP lease matching {name: "kindnet-119870", mac: "52:54:00:a2:57:b7", ip: "192.168.72.10"} in network mk-kindnet-119870
	I0805 12:44:56.633347  434893 main.go:141] libmachine: (kindnet-119870) DBG | Getting to WaitForSSH function...
	I0805 12:44:56.633387  434893 main.go:141] libmachine: (kindnet-119870) Reserved static IP address: 192.168.72.10
	I0805 12:44:56.633438  434893 main.go:141] libmachine: (kindnet-119870) Waiting for SSH to be available...
	I0805 12:44:56.636062  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.636592  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.636629  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.636731  434893 main.go:141] libmachine: (kindnet-119870) DBG | Using SSH client type: external
	I0805 12:44:56.636753  434893 main.go:141] libmachine: (kindnet-119870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa (-rw-------)
	I0805 12:44:56.636785  434893 main.go:141] libmachine: (kindnet-119870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:44:56.636796  434893 main.go:141] libmachine: (kindnet-119870) DBG | About to run SSH command:
	I0805 12:44:56.636809  434893 main.go:141] libmachine: (kindnet-119870) DBG | exit 0
	I0805 12:44:56.760018  434893 main.go:141] libmachine: (kindnet-119870) DBG | SSH cmd err, output: <nil>: 
	I0805 12:44:56.760338  434893 main.go:141] libmachine: (kindnet-119870) KVM machine creation complete!
	I0805 12:44:56.760676  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetConfigRaw
	I0805 12:44:56.761244  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:56.761466  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:56.761682  434893 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 12:44:56.761701  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetState
	I0805 12:44:56.763034  434893 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 12:44:56.763052  434893 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 12:44:56.763059  434893 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 12:44:56.763068  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:56.765321  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.765673  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.765705  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.765817  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:56.766014  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.766185  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.766313  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:56.766466  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:56.766677  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:56.766691  434893 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 12:44:52.987702  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:53.488336  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:53.987711  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:54.488037  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:54.987903  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:55.487849  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:55.987844  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:56.488378  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:56.988332  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:57.488622  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.252843  435321 start.go:364] duration metric: took 14.249505532s to acquireMachinesLock for "pause-335738"
	I0805 12:44:58.252909  435321 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:44:58.252921  435321 fix.go:54] fixHost starting: 
	I0805 12:44:58.253335  435321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:44:58.253396  435321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:44:58.273749  435321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0805 12:44:58.274208  435321 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:44:58.274727  435321 main.go:141] libmachine: Using API Version  1
	I0805 12:44:58.274754  435321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:44:58.275052  435321 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:44:58.275256  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:58.275414  435321 main.go:141] libmachine: (pause-335738) Calling .GetState
	I0805 12:44:58.277106  435321 fix.go:112] recreateIfNeeded on pause-335738: state=Running err=<nil>
	W0805 12:44:58.277151  435321 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:44:58.279288  435321 out.go:177] * Updating the running kvm2 "pause-335738" VM ...
	I0805 12:44:58.280673  435321 machine.go:94] provisionDockerMachine start ...
	I0805 12:44:58.280701  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:44:58.280894  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.284133  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.284613  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.284640  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.284791  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.284946  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.285090  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.285225  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.285452  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.285645  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.285656  435321 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:44:56.867111  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:44:56.867137  434893 main.go:141] libmachine: Detecting the provisioner...
	I0805 12:44:56.867145  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:56.869996  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.870310  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.870346  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.870481  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:56.870703  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.870913  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.871081  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:56.871331  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:56.871513  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:56.871522  434893 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 12:44:56.972557  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 12:44:56.972641  434893 main.go:141] libmachine: found compatible host: buildroot
	I0805 12:44:56.972653  434893 main.go:141] libmachine: Provisioning with buildroot...
	I0805 12:44:56.972662  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetMachineName
	I0805 12:44:56.972939  434893 buildroot.go:166] provisioning hostname "kindnet-119870"
	I0805 12:44:56.972966  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetMachineName
	I0805 12:44:56.973177  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:56.976140  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.976567  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:56.976597  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:56.976706  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:56.976879  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.977035  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:56.977208  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:56.977385  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:56.977560  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:56.977572  434893 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-119870 && echo "kindnet-119870" | sudo tee /etc/hostname
	I0805 12:44:57.099024  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-119870
	
	I0805 12:44:57.099057  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.101768  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.102132  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.102161  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.102552  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:57.102772  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.102977  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.103136  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:57.103332  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:57.103508  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:57.103525  434893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-119870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-119870/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-119870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:44:57.213876  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:44:57.213908  434893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:44:57.213949  434893 buildroot.go:174] setting up certificates
	I0805 12:44:57.213962  434893 provision.go:84] configureAuth start
	I0805 12:44:57.213973  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetMachineName
	I0805 12:44:57.214331  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:57.217345  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.217761  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.217786  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.217995  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.220388  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.220709  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.220753  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.220864  434893 provision.go:143] copyHostCerts
	I0805 12:44:57.220923  434893 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:44:57.220939  434893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:44:57.221004  434893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:44:57.221103  434893 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:44:57.221113  434893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:44:57.221133  434893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:44:57.221185  434893 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:44:57.221192  434893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:44:57.221208  434893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:44:57.221254  434893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.kindnet-119870 san=[127.0.0.1 192.168.72.10 kindnet-119870 localhost minikube]
	I0805 12:44:57.576576  434893 provision.go:177] copyRemoteCerts
	I0805 12:44:57.576643  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:44:57.576670  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.579637  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.579986  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.580020  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.580264  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:57.580449  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.580620  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:57.580733  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:57.663238  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0805 12:44:57.688031  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:44:57.712663  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:44:57.743530  434893 provision.go:87] duration metric: took 529.553371ms to configureAuth
	I0805 12:44:57.743559  434893 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:44:57.743790  434893 config.go:182] Loaded profile config "kindnet-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:57.743904  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:57.746398  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.746760  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:57.746781  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:57.746980  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:57.747181  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.747343  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:57.747465  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:57.747599  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:57.747813  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:57.747831  434893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:44:58.016168  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:44:58.016204  434893 main.go:141] libmachine: Checking connection to Docker...
	I0805 12:44:58.016212  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetURL
	I0805 12:44:58.017670  434893 main.go:141] libmachine: (kindnet-119870) DBG | Using libvirt version 6000000
	I0805 12:44:58.020355  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.020875  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.020908  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.021074  434893 main.go:141] libmachine: Docker is up and running!
	I0805 12:44:58.021090  434893 main.go:141] libmachine: Reticulating splines...
	I0805 12:44:58.021098  434893 client.go:171] duration metric: took 25.634465705s to LocalClient.Create
	I0805 12:44:58.021120  434893 start.go:167] duration metric: took 25.634531809s to libmachine.API.Create "kindnet-119870"
	I0805 12:44:58.021129  434893 start.go:293] postStartSetup for "kindnet-119870" (driver="kvm2")
	I0805 12:44:58.021144  434893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:44:58.021161  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.021408  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:44:58.021439  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.023811  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.024132  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.024163  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.024330  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.024527  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.024715  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.024883  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:58.106053  434893 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:44:58.110316  434893 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:44:58.110342  434893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:44:58.110427  434893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:44:58.110535  434893 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:44:58.110650  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:44:58.120388  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:44:58.144312  434893 start.go:296] duration metric: took 123.168771ms for postStartSetup
	I0805 12:44:58.144370  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetConfigRaw
	I0805 12:44:58.144991  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:58.147736  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.148218  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.148253  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.148486  434893 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/config.json ...
	I0805 12:44:58.148668  434893 start.go:128] duration metric: took 25.787662742s to createHost
	I0805 12:44:58.148701  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.151139  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.151426  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.151457  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.151604  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.151810  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.152013  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.152172  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.152392  434893 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.152605  434893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I0805 12:44:58.152619  434893 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:44:58.252636  434893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722861898.226477509
	
	I0805 12:44:58.252665  434893 fix.go:216] guest clock: 1722861898.226477509
	I0805 12:44:58.252680  434893 fix.go:229] Guest: 2024-08-05 12:44:58.226477509 +0000 UTC Remote: 2024-08-05 12:44:58.148689335 +0000 UTC m=+51.328647468 (delta=77.788174ms)
	I0805 12:44:58.252726  434893 fix.go:200] guest clock delta is within tolerance: 77.788174ms
	I0805 12:44:58.252734  434893 start.go:83] releasing machines lock for "kindnet-119870", held for 25.891936471s
	I0805 12:44:58.252772  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.253119  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:58.255933  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.256291  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.256316  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.256544  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.257100  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.257333  434893 main.go:141] libmachine: (kindnet-119870) Calling .DriverName
	I0805 12:44:58.257443  434893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:44:58.257488  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.257541  434893 ssh_runner.go:195] Run: cat /version.json
	I0805 12:44:58.257568  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHHostname
	I0805 12:44:58.260338  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.260586  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.260738  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.260769  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.260934  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.260947  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:58.260972  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:58.261135  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHPort
	I0805 12:44:58.261158  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.261326  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHKeyPath
	I0805 12:44:58.261353  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.261511  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetSSHUsername
	I0805 12:44:58.261517  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:58.261654  434893 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/kindnet-119870/id_rsa Username:docker}
	I0805 12:44:58.364801  434893 ssh_runner.go:195] Run: systemctl --version
	I0805 12:44:58.371389  434893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:44:58.534509  434893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:44:58.541262  434893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:44:58.541329  434893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:44:58.560185  434893 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:44:58.560215  434893 start.go:495] detecting cgroup driver to use...
	I0805 12:44:58.560297  434893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:44:58.577096  434893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:44:58.591396  434893 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:44:58.591451  434893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:44:58.605793  434893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:44:58.621914  434893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:44:58.756993  434893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:44:58.926921  434893 docker.go:233] disabling docker service ...
	I0805 12:44:58.927001  434893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:44:58.944233  434893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:44:58.957369  434893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:44:59.103223  434893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:44:59.247242  434893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:44:59.264889  434893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:44:59.285696  434893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:44:59.285770  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.297232  434893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:44:59.297305  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.308055  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.318602  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.329408  434893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:44:59.340384  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.354364  434893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.376155  434893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:44:59.389766  434893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:44:59.402604  434893 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:44:59.402693  434893 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:44:59.417804  434893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:44:59.428825  434893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:44:59.560900  434893 ssh_runner.go:195] Run: sudo systemctl restart crio
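The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts the runtime. As a rough illustration of the same idea, the hypothetical helper below (not minikube's crio.go; error handling reduced to panics) applies the two central edits and restarts crio using only the standard library; in the real flow these commands are issued over the VM's SSH session via ssh_runner rather than locally.

    // Illustrative sketch: pin the pause image and cgroup manager in the
    // cri-o drop-in, then reload systemd and restart the service.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            panic(err)
        }
        // daemon-reload and restart, as in the log above.
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "restart", "crio"},
        } {
            if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
                panic(fmt.Errorf("%v: %w", args, err))
            }
        }
    }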
	I0805 12:44:59.702965  434893 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:44:59.703036  434893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:44:59.708288  434893 start.go:563] Will wait 60s for crictl version
	I0805 12:44:59.708351  434893 ssh_runner.go:195] Run: which crictl
	I0805 12:44:59.712258  434893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:44:59.755798  434893 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:44:59.755907  434893 ssh_runner.go:195] Run: crio --version
	I0805 12:44:59.783941  434893 ssh_runner.go:195] Run: crio --version
	I0805 12:44:59.814845  434893 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:44:59.816091  434893 main.go:141] libmachine: (kindnet-119870) Calling .GetIP
	I0805 12:44:59.818988  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:59.819333  434893 main.go:141] libmachine: (kindnet-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:57:b7", ip: ""} in network mk-kindnet-119870: {Iface:virbr4 ExpiryTime:2024-08-05 13:44:48 +0000 UTC Type:0 Mac:52:54:00:a2:57:b7 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:kindnet-119870 Clientid:01:52:54:00:a2:57:b7}
	I0805 12:44:59.819371  434893 main.go:141] libmachine: (kindnet-119870) DBG | domain kindnet-119870 has defined IP address 192.168.72.10 and MAC address 52:54:00:a2:57:b7 in network mk-kindnet-119870
	I0805 12:44:59.819646  434893 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:44:59.823947  434893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
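The one-liner above keeps the host.minikube.internal mapping idempotent: drop any existing line, append the fresh one, and copy the result back over /etc/hosts. A pure-Go sketch of the same pattern (illustrative only; the path and IP are taken from this log entry):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.72.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // drop any stale mapping first
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }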
	I0805 12:44:59.836996  434893 kubeadm.go:883] updating cluster {Name:kindnet-119870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:kindnet-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:44:59.837118  434893 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:44:59.837167  434893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:44:59.870712  434893 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:44:59.870793  434893 ssh_runner.go:195] Run: which lz4
	I0805 12:44:59.874840  434893 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:44:59.879189  434893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:44:59.879213  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:45:01.359338  434893 crio.go:462] duration metric: took 1.484532897s to copy over tarball
	I0805 12:45:01.359440  434893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
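The preload flow above is: stat the target tarball, scp the cached preloaded-images tarball over when it is missing, then unpack it into /var with tar -I lz4. A self-contained sketch of those three steps, assuming a plain local file copy stands in for the scp hop (this is not minikube's preload code):

    package main

    import (
        "io"
        "os"
        "os/exec"
    )

    func main() {
        const target = "/preloaded.tar.lz4"
        const cached = "/home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4"
        if _, err := os.Stat(target); err != nil {
            src, err := os.Open(cached)
            if err != nil {
                panic(err)
            }
            defer src.Close()
            dst, err := os.Create(target)
            if err != nil {
                panic(err)
            }
            if _, err := io.Copy(dst, src); err != nil {
                panic(err)
            }
            dst.Close()
        }
        // Matches the tar invocation in the log (lz4 decompression via -I).
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", target)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }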
	I0805 12:44:57.987769  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.487712  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.988274  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:59.488526  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:59.988523  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:00.488628  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:00.988722  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:01.488747  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:01.988602  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:02.488202  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:44:58.400571  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-335738
	
	I0805 12:44:58.400607  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.400894  435321 buildroot.go:166] provisioning hostname "pause-335738"
	I0805 12:44:58.400928  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.401212  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.404011  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.404385  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.404407  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.404647  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.404816  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.404970  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.405127  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.405377  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.405594  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.405612  435321 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-335738 && echo "pause-335738" | sudo tee /etc/hostname
	I0805 12:44:58.531104  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-335738
	
	I0805 12:44:58.531139  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.534469  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.534976  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.535021  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.535254  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:58.535511  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.535713  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:58.535906  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:58.536130  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:58.536346  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:58.536371  435321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-335738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-335738/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-335738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:44:58.654002  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:44:58.654040  435321 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:44:58.654109  435321 buildroot.go:174] setting up certificates
	I0805 12:44:58.654119  435321 provision.go:84] configureAuth start
	I0805 12:44:58.654136  435321 main.go:141] libmachine: (pause-335738) Calling .GetMachineName
	I0805 12:44:58.654481  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:44:58.657169  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.657539  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.657566  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.657679  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:58.659937  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.660312  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:58.660336  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:58.660613  435321 provision.go:143] copyHostCerts
	I0805 12:44:58.660679  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:44:58.660690  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:44:58.660740  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:44:58.660833  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:44:58.660842  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:44:58.660863  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:44:58.660914  435321 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:44:58.660921  435321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:44:58.660945  435321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:44:58.660988  435321 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.pause-335738 san=[127.0.0.1 192.168.39.97 localhost minikube pause-335738]
	I0805 12:44:59.028284  435321 provision.go:177] copyRemoteCerts
	I0805 12:44:59.028377  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:44:59.028414  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:59.031279  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.031702  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:59.031760  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.031939  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:59.032172  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.032322  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:59.032465  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:44:59.122102  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:44:59.152102  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 12:44:59.185021  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:44:59.216147  435321 provision.go:87] duration metric: took 562.010148ms to configureAuth
	I0805 12:44:59.216200  435321 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:44:59.216425  435321 config.go:182] Loaded profile config "pause-335738": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:44:59.216544  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:44:59.219728  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.220160  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:44:59.220193  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:44:59.220453  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:44:59.220684  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.220862  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:44:59.220995  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:44:59.221192  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:44:59.221433  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:44:59.221465  435321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:45:02.988025  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:03.487921  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:03.987872  434553 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 12:45:04.083399  434553 kubeadm.go:1113] duration metric: took 13.718223985s to wait for elevateKubeSystemPrivileges
	I0805 12:45:04.083444  434553 kubeadm.go:394] duration metric: took 24.284474624s to StartCluster
	I0805 12:45:04.083471  434553 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.083556  434553 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:45:04.084789  434553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.085043  434553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 12:45:04.085058  434553 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:45:04.085122  434553 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:45:04.085206  434553 addons.go:69] Setting storage-provisioner=true in profile "auto-119870"
	I0805 12:45:04.085238  434553 addons.go:234] Setting addon storage-provisioner=true in "auto-119870"
	I0805 12:45:04.085269  434553 config.go:182] Loaded profile config "auto-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:45:04.085272  434553 host.go:66] Checking if "auto-119870" exists ...
	I0805 12:45:04.085262  434553 addons.go:69] Setting default-storageclass=true in profile "auto-119870"
	I0805 12:45:04.085326  434553 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-119870"
	I0805 12:45:04.085654  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.085678  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.085783  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.085835  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.086659  434553 out.go:177] * Verifying Kubernetes components...
	I0805 12:45:04.088057  434553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:04.106100  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0805 12:45:04.106147  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37415
	I0805 12:45:04.106610  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.106722  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.107146  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.107170  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.107298  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.107328  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.107585  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.107802  434553 main.go:141] libmachine: (auto-119870) Calling .GetState
	I0805 12:45:04.107856  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.108549  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.108896  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.111991  434553 addons.go:234] Setting addon default-storageclass=true in "auto-119870"
	I0805 12:45:04.112036  434553 host.go:66] Checking if "auto-119870" exists ...
	I0805 12:45:04.112313  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.112343  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.129062  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0805 12:45:04.129621  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.130233  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.130253  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.134972  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I0805 12:45:04.135159  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.135415  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.135579  434553 main.go:141] libmachine: (auto-119870) Calling .GetState
	I0805 12:45:04.136311  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.136340  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.136785  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.137368  434553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:45:04.137407  434553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:45:04.138414  434553 main.go:141] libmachine: (auto-119870) Calling .DriverName
	I0805 12:45:04.140386  434553 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:45:03.825786  434893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.466309401s)
	I0805 12:45:03.825816  434893 crio.go:469] duration metric: took 2.466442343s to extract the tarball
	I0805 12:45:03.825825  434893 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:45:03.873168  434893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:03.922121  434893 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:03.922146  434893 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:45:03.922155  434893 kubeadm.go:934] updating node { 192.168.72.10 8443 v1.30.3 crio true true} ...
	I0805 12:45:03.922293  434893 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-119870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0805 12:45:03.922397  434893 ssh_runner.go:195] Run: crio config
	I0805 12:45:03.979120  434893 cni.go:84] Creating CNI manager for "kindnet"
	I0805 12:45:03.979175  434893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:45:03.979209  434893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-119870 NodeName:kindnet-119870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:45:03.979439  434893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-119870"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:45:03.979521  434893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:45:03.989880  434893 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:45:03.989964  434893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:45:03.999901  434893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 12:45:04.019677  434893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:45:04.041630  434893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
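The kubeadm.yaml.new just copied over is essentially the config dump from kubeadm.go:187 with per-node values filled in. A minimal sketch of that substitution using text/template, with only the node-specific fields parameterized (values come from the kindnet-119870 log; the field set is abbreviated, not the full config):

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: 10.96.0.0/12
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        _ = t.Execute(os.Stdout, struct {
            NodeName, NodeIP, PodSubnet string
        }{"kindnet-119870", "192.168.72.10", "10.244.0.0/16"})
    }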
	I0805 12:45:04.065172  434893 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I0805 12:45:04.070320  434893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:45:04.087860  434893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:04.224230  434893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:45:04.249531  434893 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870 for IP: 192.168.72.10
	I0805 12:45:04.249562  434893 certs.go:194] generating shared ca certs ...
	I0805 12:45:04.249586  434893 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.249787  434893 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:45:04.249855  434893 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:45:04.249870  434893 certs.go:256] generating profile certs ...
	I0805 12:45:04.249961  434893 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.key
	I0805 12:45:04.249979  434893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt with IP's: []
	I0805 12:45:04.346617  434893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt ...
	I0805 12:45:04.346643  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: {Name:mk3f222c7678011251b9be7adaed1cca9432f54a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.349645  434893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.key ...
	I0805 12:45:04.349672  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.key: {Name:mk1559a046ba6d292b37a939a000ecb417c1d69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.349802  434893 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03
	I0805 12:45:04.349825  434893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.10]
	I0805 12:45:04.837610  434893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03 ...
	I0805 12:45:04.837641  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03: {Name:mk3f07972809b40722feea3cc23349534a06b43c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.876653  434893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03 ...
	I0805 12:45:04.876680  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03: {Name:mka83207883f1a382a731733dd6b27e345d8def5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:04.876794  434893 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt.51996b03 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt
	I0805 12:45:04.876908  434893 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key.51996b03 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key
	I0805 12:45:04.877006  434893 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key
	I0805 12:45:04.877027  434893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt with IP's: []
	I0805 12:45:04.983179  434893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt ...
	I0805 12:45:04.983212  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt: {Name:mk23f04203fd617cbc4c347c7c65ec7b14bef93a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:05.039115  434893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key ...
	I0805 12:45:05.039171  434893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key: {Name:mk0fa1185ede3374007c2d42f52bad662da0b89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:05.039506  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:45:05.039567  434893 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:45:05.039578  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:45:05.039608  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:45:05.039644  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:45:05.039675  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:45:05.039737  434893 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:05.040651  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:45:05.125386  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:45:05.152505  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:45:05.178622  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:45:05.202388  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 12:45:05.225937  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:45:05.250479  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:45:05.275606  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:45:05.300464  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:45:05.323649  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:45:05.347218  434893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:45:05.372745  434893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
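The certs.go steps above generate per-profile keys and certificates signed by the shared minikubeCA, with the apiserver certificate carrying the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.10), and then distribute them to /var/lib/minikube/certs. A simplified crypto/x509 sketch of the signing step; a throwaway CA is minted inline so the example is self-contained, and error checks are elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // In the real flow the CA key/cert are loaded from .minikube/ca.{key,crt}.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // The apiserver certificate, with the IP SANs from the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.10"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }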
	I0805 12:45:05.390762  434893 ssh_runner.go:195] Run: openssl version
	I0805 12:45:05.397193  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:45:05.408510  434893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:45:05.413195  434893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:45:05.413246  434893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:45:05.419162  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:45:05.430328  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:45:05.441477  434893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:05.446209  434893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:05.446272  434893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:05.452849  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:45:05.465381  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:45:05.480068  434893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:45:05.485313  434893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:45:05.485391  434893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:45:05.492049  434893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
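Each CA copied to /usr/share/ca-certificates is then made visible to OpenSSL by linking it under /etc/ssl/certs/<subject-hash>.0, which is what the paired "openssl x509 -hash" and "ln -fs" commands above do. A hypothetical installCA helper (assumed name, not from the source) doing the same through the openssl CLI:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // replace a stale link if one exists
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }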
	I0805 12:45:05.504309  434893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:45:05.508556  434893 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 12:45:05.508630  434893 kubeadm.go:392] StartCluster: {Name:kindnet-119870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:kindnet-119870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:45:05.508723  434893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:45:05.508769  434893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:45:05.545213  434893 cri.go:89] found id: ""
	I0805 12:45:05.545301  434893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:45:05.556126  434893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:45:05.567572  434893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:45:05.577974  434893 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:45:05.577998  434893 kubeadm.go:157] found existing configuration files:
	
	I0805 12:45:05.578046  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:45:05.588262  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:45:05.588328  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:45:05.598770  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:45:05.609877  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:45:05.609938  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:45:05.620266  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:45:05.630871  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:45:05.630931  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:45:05.641576  434893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:45:05.651588  434893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:45:05.651659  434893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
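Before running kubeadm init, the log checks each existing kubeconfig for the expected control-plane endpoint and removes any file that does not contain it (here none of them exist yet, so every grep fails and the rm is a no-op). The cleanup loop amounts to the following sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("removing stale %s\n", f)
                os.Remove(f) // ignores "no such file" for configs that were never written
            }
        }
    }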
	I0805 12:45:05.662043  434893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 12:45:05.729309  434893 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 12:45:05.729415  434893 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 12:45:05.881032  434893 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 12:45:05.881165  434893 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 12:45:05.881256  434893 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 12:45:06.081840  434893 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 12:45:06.213220  434893 out.go:204]   - Generating certificates and keys ...
	I0805 12:45:06.213339  434893 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 12:45:06.213415  434893 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 12:45:06.213497  434893 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 12:45:06.388396  434893 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 12:45:06.820111  434893 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
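The init itself is a single invocation of the versioned kubeadm binary with the rendered config and a long --ignore-preflight-errors list. A stripped-down wrapper for that call (the ignore list is abbreviated here, and the real command also prepends /var/lib/minikube/binaries/v1.30.3 to PATH and runs under sudo):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }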
	I0805 12:45:04.141810  434553 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:45:04.141833  434553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:45:04.141852  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHHostname
	I0805 12:45:04.145409  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.145933  434553 main.go:141] libmachine: (auto-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:b1", ip: ""} in network mk-auto-119870: {Iface:virbr2 ExpiryTime:2024-08-05 13:44:22 +0000 UTC Type:0 Mac:52:54:00:a8:ca:b1 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:auto-119870 Clientid:01:52:54:00:a8:ca:b1}
	I0805 12:45:04.145953  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined IP address 192.168.50.143 and MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.146102  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHPort
	I0805 12:45:04.146315  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHKeyPath
	I0805 12:45:04.146513  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHUsername
	I0805 12:45:04.146686  434553 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/auto-119870/id_rsa Username:docker}
	I0805 12:45:04.161760  434553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0805 12:45:04.162357  434553 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:45:04.163041  434553 main.go:141] libmachine: Using API Version  1
	I0805 12:45:04.163056  434553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:45:04.163377  434553 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:45:04.163629  434553 main.go:141] libmachine: (auto-119870) Calling .GetState
	I0805 12:45:04.165226  434553 main.go:141] libmachine: (auto-119870) Calling .DriverName
	I0805 12:45:04.165534  434553 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:45:04.165553  434553 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:45:04.165572  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHHostname
	I0805 12:45:04.168221  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.168574  434553 main.go:141] libmachine: (auto-119870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:ca:b1", ip: ""} in network mk-auto-119870: {Iface:virbr2 ExpiryTime:2024-08-05 13:44:22 +0000 UTC Type:0 Mac:52:54:00:a8:ca:b1 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:auto-119870 Clientid:01:52:54:00:a8:ca:b1}
	I0805 12:45:04.168603  434553 main.go:141] libmachine: (auto-119870) DBG | domain auto-119870 has defined IP address 192.168.50.143 and MAC address 52:54:00:a8:ca:b1 in network mk-auto-119870
	I0805 12:45:04.168839  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHPort
	I0805 12:45:04.169012  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHKeyPath
	I0805 12:45:04.169128  434553 main.go:141] libmachine: (auto-119870) Calling .GetSSHUsername
	I0805 12:45:04.169243  434553 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/auto-119870/id_rsa Username:docker}
	I0805 12:45:04.365659  434553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:45:04.385049  434553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:45:04.447637  434553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:45:04.447679  434553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 12:45:05.628741  434553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.26304238s)
	I0805 12:45:05.628815  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:05.628830  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:05.629145  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:05.629168  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:05.629179  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:05.629187  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:05.629433  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:05.629458  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:06.329497  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:06.329526  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:06.329839  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:06.329859  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:06.329909  434553 main.go:141] libmachine: (auto-119870) DBG | Closing plugin on server side
	I0805 12:45:07.493407  434553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.108317832s)
	I0805 12:45:07.493468  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:07.493469  434553 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.045792509s)
	I0805 12:45:07.493493  434553 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.045790427s)
	I0805 12:45:07.493511  434553 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
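The sed pipeline above rewrites the "coredns" ConfigMap so that host.minikube.internal resolves to the gateway IP (192.168.50.1 in this run). A minimal sketch of the same edit done with client-go rather than the logged kubectl/sed shell-out (the client-go approach and the kubeconfig path are assumptions for illustration only):

// Sketch only: insert a hosts{} stanza for host.minikube.internal into the
// CoreDNS Corefile before the forward plugin, then update the ConfigMap.
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	hosts := "        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}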
	I0805 12:45:07.493480  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:07.493872  434553 main.go:141] libmachine: (auto-119870) DBG | Closing plugin on server side
	I0805 12:45:07.493924  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:07.493948  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:07.493962  434553 main.go:141] libmachine: Making call to close driver server
	I0805 12:45:07.493970  434553 main.go:141] libmachine: (auto-119870) Calling .Close
	I0805 12:45:07.494237  434553 main.go:141] libmachine: (auto-119870) DBG | Closing plugin on server side
	I0805 12:45:07.494282  434553 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:45:07.494290  434553 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:45:07.494664  434553 node_ready.go:35] waiting up to 15m0s for node "auto-119870" to be "Ready" ...
	I0805 12:45:07.496631  434553 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 12:45:07.497825  434553 addons.go:510] duration metric: took 3.412714062s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 12:45:07.501679  434553 node_ready.go:49] node "auto-119870" has status "Ready":"True"
	I0805 12:45:07.501707  434553 node_ready.go:38] duration metric: took 7.01384ms for node "auto-119870" to be "Ready" ...
	I0805 12:45:07.501720  434553 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:45:07.514457  434553 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-2smkn" in "kube-system" namespace to be "Ready" ...
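The pod_ready.go lines above poll the coredns pod until it reports the Ready condition (or the 15m budget runs out). A minimal client-go sketch of that wait loop, not minikube's internal implementation; the pod name comes from the log and the kubeconfig location is an assumption:

// Sketch only: poll a named pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-2smkn", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}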
	I0805 12:45:07.435365  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:45:07.435400  435321 machine.go:97] duration metric: took 9.154706851s to provisionDockerMachine
	I0805 12:45:07.435416  435321 start.go:293] postStartSetup for "pause-335738" (driver="kvm2")
	I0805 12:45:07.435430  435321 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:45:07.435463  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.435973  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:45:07.436011  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.439119  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.439536  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.439568  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.439811  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.440026  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.440198  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.440359  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.531327  435321 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:45:07.536064  435321 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:45:07.536093  435321 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:45:07.536168  435321 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:45:07.536277  435321 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:45:07.536401  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:45:07.548192  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:07.583195  435321 start.go:296] duration metric: took 147.760389ms for postStartSetup
	I0805 12:45:07.583246  435321 fix.go:56] duration metric: took 9.330325706s for fixHost
	I0805 12:45:07.583273  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.586518  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.586949  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.586981  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.587188  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.587426  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.587614  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.587795  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.587976  435321 main.go:141] libmachine: Using SSH client type: native
	I0805 12:45:07.588199  435321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0805 12:45:07.588214  435321 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:45:07.709190  435321 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722861907.704347151
	
	I0805 12:45:07.709217  435321 fix.go:216] guest clock: 1722861907.704347151
	I0805 12:45:07.709227  435321 fix.go:229] Guest: 2024-08-05 12:45:07.704347151 +0000 UTC Remote: 2024-08-05 12:45:07.583251272 +0000 UTC m=+24.284926934 (delta=121.095879ms)
	I0805 12:45:07.709254  435321 fix.go:200] guest clock delta is within tolerance: 121.095879ms
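The `date +%!s(MISSING).%!N(MISSING)` line above is just the log's printf escaping of `date +%s.%N`: the guest prints its clock as epoch seconds with nanoseconds, and fix.go compares that instant against the host clock (delta of ~121ms here, within tolerance). A minimal sketch of that comparison, assuming the guest output has already been captured as a string:

// Sketch only: compute the skew between a guest-reported epoch timestamp
// (output of `date +%s.%N`) and the local clock.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOutput := "1722861907.704347151" // stdout of `date +%s.%N` on the guest
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta)
}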
	I0805 12:45:07.709261  435321 start.go:83] releasing machines lock for "pause-335738", held for 9.456379931s
	I0805 12:45:07.709285  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.709564  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:45:07.713014  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.713434  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.713461  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.713671  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714276  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714515  435321 main.go:141] libmachine: (pause-335738) Calling .DriverName
	I0805 12:45:07.714639  435321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:45:07.714697  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.714762  435321 ssh_runner.go:195] Run: cat /version.json
	I0805 12:45:07.714791  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHHostname
	I0805 12:45:07.717917  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.717952  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718167  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.718188  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718332  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.718447  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:07.718470  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:07.718508  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.718569  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHPort
	I0805 12:45:07.718707  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHKeyPath
	I0805 12:45:07.718708  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.718884  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.718987  435321 main.go:141] libmachine: (pause-335738) Calling .GetSSHUsername
	I0805 12:45:07.719193  435321 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/pause-335738/id_rsa Username:docker}
	I0805 12:45:07.805733  435321 ssh_runner.go:195] Run: systemctl --version
	I0805 12:45:07.828521  435321 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:45:07.992325  435321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:45:08.000564  435321 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:45:08.000647  435321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:45:08.013511  435321 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
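The find/mv step above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they do not conflict with the CNI minikube is about to configure; in this run nothing matched. A minimal local sketch of that rename pass (minikube runs it over SSH, and the exact filename filter is simplified here):

// Sketch only: disable bridge/podman CNI configs by renaming them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("nothing to disable:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}
}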
	I0805 12:45:08.013535  435321 start.go:495] detecting cgroup driver to use...
	I0805 12:45:08.013612  435321 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:45:08.037227  435321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:45:08.056733  435321 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:45:08.056797  435321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:45:08.077383  435321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:45:08.135695  435321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:45:06.923649  434893 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 12:45:07.053094  434893 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 12:45:07.053484  434893 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-119870 localhost] and IPs [192.168.72.10 127.0.0.1 ::1]
	I0805 12:45:07.201979  434893 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 12:45:07.202236  434893 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-119870 localhost] and IPs [192.168.72.10 127.0.0.1 ::1]
	I0805 12:45:07.428235  434893 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 12:45:07.639538  434893 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 12:45:08.053798  434893 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 12:45:08.053972  434893 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 12:45:08.267929  434893 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 12:45:08.365899  434893 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 12:45:08.512425  434893 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 12:45:08.643905  434893 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 12:45:08.708761  434893 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 12:45:08.709621  434893 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 12:45:08.711903  434893 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 12:45:08.373203  435321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:45:08.650299  435321 docker.go:233] disabling docker service ...
	I0805 12:45:08.650369  435321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:45:08.733071  435321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:45:08.780524  435321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:45:09.135883  435321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:45:09.436578  435321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:45:09.464255  435321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:45:09.496891  435321 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:45:09.496966  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.517774  435321 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:45:09.517853  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.554633  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.572207  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.585896  435321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:45:09.610404  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.627089  435321 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.648767  435321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:45:09.668714  435321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:45:09.690244  435321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
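The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so cri-o uses the registry.k8s.io/pause:3.9 image and the cgroupfs cgroup driver before the service is restarted. A minimal in-Go sketch of the two main substitutions (minikube performs them via sed over SSH; this standalone program is only illustrative):

// Sketch only: apply the pause_image and cgroup_manager edits to crio's drop-in.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}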
	I0805 12:45:09.705043  435321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:09.934592  435321 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:45:10.469200  435321 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:45:10.469293  435321 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
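After restarting crio, the start logic waits up to 60s for the CRI socket to appear before checking crictl. A minimal sketch of that wait (polling interval is an assumption; minikube runs `stat` over SSH instead):

// Sketch only: poll for /var/run/crio/crio.sock until it exists or 60s pass.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
			fmt.Println("crio socket is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for crio socket")
}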
	I0805 12:45:10.475998  435321 start.go:563] Will wait 60s for crictl version
	I0805 12:45:10.476070  435321 ssh_runner.go:195] Run: which crictl
	I0805 12:45:10.515388  435321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:45:10.668529  435321 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:45:10.668682  435321 ssh_runner.go:195] Run: crio --version
	I0805 12:45:10.900094  435321 ssh_runner.go:195] Run: crio --version
	I0805 12:45:10.975777  435321 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:45:08.713600  434893 out.go:204]   - Booting up control plane ...
	I0805 12:45:08.713715  434893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 12:45:08.717685  434893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 12:45:08.718925  434893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 12:45:08.740263  434893 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 12:45:08.740417  434893 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 12:45:08.740496  434893 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 12:45:08.898938  434893 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 12:45:08.899078  434893 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 12:45:09.900870  434893 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00232472s
	I0805 12:45:09.901028  434893 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 12:45:07.998190  434553 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-119870" context rescaled to 1 replicas
	I0805 12:45:09.524999  434553 pod_ready.go:102] pod "coredns-7db6d8ff4d-2smkn" in "kube-system" namespace has status "Ready":"False"
	I0805 12:45:12.024133  434553 pod_ready.go:102] pod "coredns-7db6d8ff4d-2smkn" in "kube-system" namespace has status "Ready":"False"
	I0805 12:45:10.977172  435321 main.go:141] libmachine: (pause-335738) Calling .GetIP
	I0805 12:45:10.980756  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:10.981259  435321 main.go:141] libmachine: (pause-335738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:22:6e", ip: ""} in network mk-pause-335738: {Iface:virbr1 ExpiryTime:2024-08-05 13:43:55 +0000 UTC Type:0 Mac:52:54:00:c5:22:6e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-335738 Clientid:01:52:54:00:c5:22:6e}
	I0805 12:45:10.981303  435321 main.go:141] libmachine: (pause-335738) DBG | domain pause-335738 has defined IP address 192.168.39.97 and MAC address 52:54:00:c5:22:6e in network mk-pause-335738
	I0805 12:45:10.981603  435321 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:45:10.989940  435321 kubeadm.go:883] updating cluster {Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:45:10.990128  435321 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:45:10.990202  435321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:11.043291  435321 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:11.043319  435321 crio.go:433] Images already preloaded, skipping extraction
	I0805 12:45:11.043368  435321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:45:11.090395  435321 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:45:11.090428  435321 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:45:11.090440  435321 kubeadm.go:934] updating node { 192.168.39.97 8443 v1.30.3 crio true true} ...
	I0805 12:45:11.090582  435321 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-335738 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
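The kubelet drop-in above is generated from the node settings (version, node name, node IP) shown in the config dump. A sketch of rendering that unit with text/template; the template approach is an assumption for illustration, minikube assembles the string in kubeadm.go:

// Sketch only: render the kubelet systemd drop-in from node parameters.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"NodeName":          "pause-335738",
		"NodeIP":            "192.168.39.97",
	})
}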
	I0805 12:45:11.090699  435321 ssh_runner.go:195] Run: crio config
	I0805 12:45:11.189451  435321 cni.go:84] Creating CNI manager for ""
	I0805 12:45:11.189482  435321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:45:11.189499  435321 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:45:11.189529  435321 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-335738 NodeName:pause-335738 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:45:11.189716  435321 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-335738"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
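The kubeadm config above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) later copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that decodes the documents and prints each kind; gopkg.in/yaml.v3 is an assumption, any decoder that understands "---" separators would do:

// Sketch only: enumerate the documents in the generated kubeadm.yaml.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}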
	
	I0805 12:45:11.189792  435321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:45:11.200510  435321 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:45:11.200601  435321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:45:11.212958  435321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 12:45:11.237802  435321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:45:11.254534  435321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0805 12:45:11.271371  435321 ssh_runner.go:195] Run: grep 192.168.39.97	control-plane.minikube.internal$ /etc/hosts
	I0805 12:45:11.275240  435321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:45:11.410170  435321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:45:11.425406  435321 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738 for IP: 192.168.39.97
	I0805 12:45:11.425438  435321 certs.go:194] generating shared ca certs ...
	I0805 12:45:11.425460  435321 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:45:11.425613  435321 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:45:11.425657  435321 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:45:11.425666  435321 certs.go:256] generating profile certs ...
	I0805 12:45:11.425737  435321 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/client.key
	I0805 12:45:11.425821  435321 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.key.4c2e0008
	I0805 12:45:11.425881  435321 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.key
	I0805 12:45:11.425992  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:45:11.426021  435321 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:45:11.426030  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:45:11.426052  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:45:11.426076  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:45:11.426098  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:45:11.426133  435321 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:45:11.426731  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:45:11.451227  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:45:11.477587  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:45:11.504930  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:45:11.529685  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 12:45:11.558933  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:45:11.585167  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:45:11.614871  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/pause-335738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:45:11.644643  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:45:11.672724  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:45:11.732393  435321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:45:11.756262  435321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:45:11.773242  435321 ssh_runner.go:195] Run: openssl version
	I0805 12:45:11.778989  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:45:11.790709  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.795841  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.795942  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:45:11.802389  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:45:11.812281  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:45:11.823184  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.827757  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.827815  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:45:11.833336  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:45:11.843830  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:45:11.855223  435321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.860007  435321 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.860059  435321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:45:11.865896  435321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:45:11.876271  435321 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:45:11.881124  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:45:11.887289  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:45:11.896111  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:45:11.901722  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:45:11.907361  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:45:11.913038  435321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
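The `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate is still valid for at least 24 hours before the existing certs are reused. The same check in Go with crypto/x509, as a sketch (file path taken from the log):

// Sketch only: report whether a PEM certificate expires within the next 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}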
	I0805 12:45:11.918906  435321 kubeadm.go:392] StartCluster: {Name:pause-335738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-335738 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:45:11.919030  435321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:45:11.919069  435321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:45:11.969286  435321 cri.go:89] found id: "2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02"
	I0805 12:45:11.969315  435321 cri.go:89] found id: "add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d"
	I0805 12:45:11.969321  435321 cri.go:89] found id: "862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892"
	I0805 12:45:11.969326  435321 cri.go:89] found id: "fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51"
	I0805 12:45:11.969330  435321 cri.go:89] found id: "62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de"
	I0805 12:45:11.969334  435321 cri.go:89] found id: "57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832"
	I0805 12:45:11.969338  435321 cri.go:89] found id: ""
	I0805 12:45:11.969406  435321 ssh_runner.go:195] Run: sudo runc list -f json
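The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call above emits one container ID per line, which cri.go collects as the "found id" entries before cross-checking with `runc list`. A sketch of gathering those IDs from the same command (run locally here rather than over SSH):

// Sketch only: list kube-system container IDs via crictl's --quiet output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}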
	
	
	==> CRI-O <==
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.403500809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861935403472819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1599ddc-1ffa-4059-a411-2fa4aae741c1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.404210724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aca93511-bec8-4d76-b46f-2fc933d4d3b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.404357047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aca93511-bec8-4d76-b46f-2fc933d4d3b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.404649327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aca93511-bec8-4d76-b46f-2fc933d4d3b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.449096437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b182df5-2d05-470c-8fa0-5ff534146acc name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.449171111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b182df5-2d05-470c-8fa0-5ff534146acc name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.450787466Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2913ab9c-2417-4d5d-b7d4-48acf8dbac51 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.451146576Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861935451125436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2913ab9c-2417-4d5d-b7d4-48acf8dbac51 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.451648594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=792b8173-c93d-4d05-9748-80fffcc39f01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.451700114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=792b8173-c93d-4d05-9748-80fffcc39f01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.451930397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=792b8173-c93d-4d05-9748-80fffcc39f01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.495945845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4afce407-cf59-4bdb-b667-c61905901585 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.496036977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4afce407-cf59-4bdb-b667-c61905901585 name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.497217573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ba9ed3f-4d5c-4c2a-aee0-e520158a951b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.497892795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861935497867139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ba9ed3f-4d5c-4c2a-aee0-e520158a951b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.498472011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b2a8e0a-b80d-406e-b7de-68c1ce932ec5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.498526762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b2a8e0a-b80d-406e-b7de-68c1ce932ec5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.498785516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b2a8e0a-b80d-406e-b7de-68c1ce932ec5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.540595291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfbdd08d-9f91-4070-927b-aa5e60f371da name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.540678500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfbdd08d-9f91-4070-927b-aa5e60f371da name=/runtime.v1.RuntimeService/Version
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.541845229Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ec007a3-4794-4fb4-af32-467d20314c31 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.542196592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722861935542172431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ec007a3-4794-4fb4-af32-467d20314c31 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.542784343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff9444a5-9cf6-4ea1-80d4-30df14ff608f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.542833468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff9444a5-9cf6-4ea1-80d4-30df14ff608f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 12:45:35 pause-335738 crio[2948]: time="2024-08-05 12:45:35.543068405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a,PodSandboxId:1029fa92bc15f8b0a44ea964f16f47d5a97cd3bf4aa9982897ba37fa47be8eec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722861918034966567,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b,PodSandboxId:bea3686074e08fc31691146745928b822cc7f7d00204ad6bb6af941a715cda81,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722861918039179678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa173645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8,PodSandboxId:135cb004edd66c7374074996a965b0abf0f49f02b171fb15223409fafae6e4ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722861914235695957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719,PodSandboxId:82e471517e9809cb5c4dccabf0797484ffed40a8f413e2c9a799eb37ce71b7b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722861914231068847,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead,PodSandboxId:c0b9a577527007267d547133465164f1beedde697fd5645ea6f9e8730ce1d347,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722861914189215069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernet
es.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644,PodSandboxId:f060ab33d43b2ab73cc0fbdf059980193dc58a69ee70fcb153f75da42036d0fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722861914208431679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02,PodSandboxId:8ddf2717f05e846210cef04e27e050662e014a7a886a6441d9f35458255b56bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722861909590521803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lzsxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9763738-61b9-43a5-9368-e06b52f43cd1,},Annotations:map[string]string{io.kubernetes.container.hash: fa17
3645,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d,PodSandboxId:4800c6d276d8e8c2ea869e35aa80ef7c4b875bb4d69248d4e2f9f2a4ed60fa18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722861908841992072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-l65cv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7245122-31af-45f8-99bf-1c863cbe5fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 648a4b00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51,PodSandboxId:cd9efca96a8c0278b8dd4fc23eaf74063cbe57d36f71ec39ced22c4dd0c9ad11,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722861908770856291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 039b9240d849ddba132706a44a556b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892,PodSandboxId:dbc576013f6676af99fc1220a817848683856f27ffb98d993b5e3612bd28ede1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722861908778693781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b95943159a9620f708d589a3b6ccb89e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de,PodSandboxId:c706526c47804552f2b45bd552416df6d89b9dedb75efc86f574f65308b2783a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722861908723653329,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-335738,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be134a1d11f4eff92b8fa914099187cc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a22d9ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832,PodSandboxId:9710e986deeef0df082dd738b905fb2cbc53bb6f93d2d58b9e6b5d59f6ee439e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722861908576082491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-335738,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 6551c46c42e7616b7906ac9288d43852,},Annotations:map[string]string{io.kubernetes.container.hash: 49b7a065,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff9444a5-9cf6-4ea1-80d4-30df14ff608f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5b483fbdb9094       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago      Running             coredns                   2                   bea3686074e08       coredns-7db6d8ff4d-lzsxg
	b2cfe1f13563b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 seconds ago      Running             kube-proxy                2                   1029fa92bc15f       kube-proxy-l65cv
	cdb056427f892       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   135cb004edd66       etcd-pause-335738
	fb2cf67e138ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   21 seconds ago      Running             kube-scheduler            2                   82e471517e980       kube-scheduler-pause-335738
	29a6e6cec6eff       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   21 seconds ago      Running             kube-controller-manager   2                   f060ab33d43b2       kube-controller-manager-pause-335738
	20853392b9232       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago      Running             kube-apiserver            2                   c0b9a57752700       kube-apiserver-pause-335738
	2d03d5b65be38       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   8ddf2717f05e8       coredns-7db6d8ff4d-lzsxg
	add61196ce4f0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   26 seconds ago      Exited              kube-proxy                1                   4800c6d276d8e       kube-proxy-l65cv
	862df9ac2aa29       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   26 seconds ago      Exited              kube-controller-manager   1                   dbc576013f667       kube-controller-manager-pause-335738
	fe361230dd126       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   26 seconds ago      Exited              kube-scheduler            1                   cd9efca96a8c0       kube-scheduler-pause-335738
	62e629ccbea51       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   26 seconds ago      Exited              kube-apiserver            1                   c706526c47804       kube-apiserver-pause-335738
	57dd9d3e8f34f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   9710e986deeef       etcd-pause-335738
	
	
	==> coredns [2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02] <==
	
	
	==> coredns [5b483fbdb9094c6b2254e1174f5404b5e13f32997cf0e93775c4a59871b6c67b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51185 - 13972 "HINFO IN 1866762790375974259.8856323661939502731. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015107772s
	
	
	==> describe nodes <==
	Name:               pause-335738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-335738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=pause-335738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T12_44_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:44:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-335738
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 12:45:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 12:45:17 +0000   Mon, 05 Aug 2024 12:44:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    pause-335738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a89b63ec2ffd4501bdd3a0b908d25c2d
	  System UUID:                a89b63ec-2ffd-4501-bdd3-a0b908d25c2d
	  Boot ID:                    7b889ebf-9f5c-45a8-837f-a2ba8811b564
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lzsxg                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-pause-335738                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         72s
	  kube-system                 kube-apiserver-pause-335738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-pause-335738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-l65cv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-335738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s (x2 over 72s)  kubelet          Node pause-335738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s (x2 over 72s)  kubelet          Node pause-335738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s (x2 over 72s)  kubelet          Node pause-335738 status is now: NodeHasSufficientPID
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeReady                71s                kubelet          Node pause-335738 status is now: NodeReady
	  Normal  RegisteredNode           59s                node-controller  Node pause-335738 event: Registered Node pause-335738 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-335738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-335738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-335738 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-335738 event: Registered Node pause-335738 in Controller
	
	
	==> dmesg <==
	[Aug 5 12:44] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.057926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059338] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.165740] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.125137] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.291545] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.300851] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.068607] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.598251] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.443014] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.115749] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.075824] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.092595] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.290533] systemd-fstab-generator[1567]: Ignoring "noauto" option for root device
	[  +7.992745] kauditd_printk_skb: 92 callbacks suppressed
	[Aug 5 12:45] systemd-fstab-generator[2392]: Ignoring "noauto" option for root device
	[  +0.253330] systemd-fstab-generator[2487]: Ignoring "noauto" option for root device
	[  +0.445775] systemd-fstab-generator[2683]: Ignoring "noauto" option for root device
	[  +0.368655] systemd-fstab-generator[2821]: Ignoring "noauto" option for root device
	[  +0.483050] systemd-fstab-generator[2923]: Ignoring "noauto" option for root device
	[  +1.531752] systemd-fstab-generator[3511]: Ignoring "noauto" option for root device
	[  +2.179567] systemd-fstab-generator[3635]: Ignoring "noauto" option for root device
	[  +0.081806] kauditd_printk_skb: 244 callbacks suppressed
	[ +15.683735] systemd-fstab-generator[4081]: Ignoring "noauto" option for root device
	[  +0.118055] kauditd_printk_skb: 50 callbacks suppressed
	
	
	==> etcd [57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832] <==
	{"level":"info","ts":"2024-08-05T12:45:09.17733Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"28.609508ms"}
	{"level":"info","ts":"2024-08-05T12:45:09.225498Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-05T12:45:09.25861Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","commit-index":415}
	{"level":"info","ts":"2024-08-05T12:45:09.258719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-05T12:45:09.258748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became follower at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:09.258764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f61fae125a956d36 [peers: [], term: 2, commit: 415, applied: 0, lastindex: 415, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-05T12:45:09.272196Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-05T12:45:09.353244Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":398}
	{"level":"info","ts":"2024-08-05T12:45:09.368166Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-05T12:45:09.379913Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f61fae125a956d36","timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:45:09.380228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f61fae125a956d36"}
	{"level":"info","ts":"2024-08-05T12:45:09.380263Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"f61fae125a956d36","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-05T12:45:09.380687Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-05T12:45:09.380842Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:09.380887Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:09.380894Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:09.381129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(17735085251460689206)"}
	{"level":"info","ts":"2024-08-05T12:45:09.381171Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-08-05T12:45:09.38127Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:09.381302Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:09.406535Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T12:45:09.406749Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T12:45:09.406787Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T12:45:09.406898Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-08-05T12:45:09.406905Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	
	
	==> etcd [cdb056427f892d2aec7eb86a5657cbdb9b0a217985c4060e9da236fc00eea5b8] <==
	{"level":"info","ts":"2024-08-05T12:45:14.581758Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-08-05T12:45:14.581858Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:14.5819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:45:14.583785Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:14.583869Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:14.583884Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:45:14.590763Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T12:45:14.591018Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T12:45:14.591061Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T12:45:14.591126Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-08-05T12:45:14.59115Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-08-05T12:45:16.05181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:16.05192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:16.051983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 2"}
	{"level":"info","ts":"2024-08-05T12:45:16.052019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.052044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.052071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.052097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-08-05T12:45:16.057574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:45:16.05752Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:pause-335738 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:45:16.058541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:45:16.058754Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:45:16.058785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:45:16.059965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:45:16.060412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.97:2379"}
	
	
	==> kernel <==
	 12:45:35 up 1 min,  0 users,  load average: 1.67, 0.58, 0.21
	Linux pause-335738 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20853392b9232a94de389437742a4eefc03ed66b0026f3dce2232814e71deead] <==
	I0805 12:45:17.529242       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 12:45:17.540321       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 12:45:17.540402       1 policy_source.go:224] refreshing policies
	I0805 12:45:17.548013       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 12:45:17.548083       1 aggregator.go:165] initial CRD sync complete...
	I0805 12:45:17.548104       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 12:45:17.548109       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 12:45:17.548113       1 cache.go:39] Caches are synced for autoregister controller
	I0805 12:45:17.599402       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 12:45:17.599438       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 12:45:17.600016       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 12:45:17.600202       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 12:45:17.600801       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 12:45:17.601979       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 12:45:17.607532       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 12:45:17.608831       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0805 12:45:17.618821       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 12:45:18.409590       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 12:45:19.023848       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 12:45:19.038837       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 12:45:19.074417       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 12:45:19.102873       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 12:45:19.109526       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 12:45:29.914960       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 12:45:29.967664       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de] <==
	I0805 12:45:09.677795       1 options.go:221] external host was not specified, using 192.168.39.97
	I0805 12:45:09.680987       1 server.go:148] Version: v1.30.3
	I0805 12:45:09.681032       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [29a6e6cec6effa81402d0037d16e85067d55f101a7c45a057619423e3684b644] <==
	I0805 12:45:29.800551       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0805 12:45:29.805602       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0805 12:45:29.805661       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0805 12:45:29.805741       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0805 12:45:29.805787       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0805 12:45:29.805951       1 shared_informer.go:320] Caches are synced for crt configmap
	I0805 12:45:29.810244       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 12:45:29.822844       1 shared_informer.go:320] Caches are synced for GC
	I0805 12:45:29.835281       1 shared_informer.go:320] Caches are synced for disruption
	I0805 12:45:29.836450       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0805 12:45:29.853774       1 shared_informer.go:320] Caches are synced for deployment
	I0805 12:45:29.853942       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0805 12:45:29.858066       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0805 12:45:29.858249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="134.001µs"
	I0805 12:45:29.902747       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0805 12:45:29.903193       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 12:45:29.932836       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:45:29.956240       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 12:45:29.971975       1 shared_informer.go:320] Caches are synced for namespace
	I0805 12:45:29.974290       1 shared_informer.go:320] Caches are synced for service account
	I0805 12:45:29.982987       1 shared_informer.go:320] Caches are synced for HPA
	I0805 12:45:30.019207       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 12:45:30.454186       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:45:30.473868       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 12:45:30.473916       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892] <==
	
	
	==> kube-proxy [add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d] <==
	
	
	==> kube-proxy [b2cfe1f13563b5d763c46bab15e7780fc0ee847d816fe24474213d19ac287e2a] <==
	I0805 12:45:18.221982       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:45:18.230636       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0805 12:45:18.262682       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:45:18.262780       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:45:18.262813       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:45:18.265510       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:45:18.265706       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:45:18.265888       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:45:18.266994       1 config.go:192] "Starting service config controller"
	I0805 12:45:18.267285       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:45:18.267429       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:45:18.267476       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:45:18.267924       1 config.go:319] "Starting node config controller"
	I0805 12:45:18.267961       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:45:18.368231       1 shared_informer.go:320] Caches are synced for node config
	I0805 12:45:18.368317       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:45:18.368439       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fb2cf67e138ffc00da0b5e832dbdbc353f1b5a01b20bafc76fe5e3b49f6d8719] <==
	I0805 12:45:15.112690       1 serving.go:380] Generated self-signed cert in-memory
	W0805 12:45:17.493110       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:45:17.493151       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:45:17.493160       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:45:17.493166       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:45:17.522790       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 12:45:17.522830       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:45:17.526886       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:45:17.526979       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:45:17.526991       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:45:17.527003       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 12:45:17.627889       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51] <==
	
	
	==> kubelet <==
	Aug 05 12:45:13 pause-335738 kubelet[3642]: E0805 12:45:13.929323    3642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-335738?timeout=10s\": dial tcp 192.168.39.97:8443: connect: connection refused" interval="400ms"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.030834    3642 kubelet_node_status.go:73] "Attempting to register node" node="pause-335738"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: E0805 12:45:14.031757    3642 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-335738"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.164155    3642 scope.go:117] "RemoveContainer" containerID="57dd9d3e8f34f97a6da8e9cb2772d12864a5ff5e3bd6fa93bcbb140763635832"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.167052    3642 scope.go:117] "RemoveContainer" containerID="62e629ccbea51616692856cbf4046c26f2e54ef331e7b238b1aa3742c4a5d0de"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.167600    3642 scope.go:117] "RemoveContainer" containerID="fe361230dd1265ebfe73cd0cb849c09c62c2b58b4281010ffaef1149e8bcfd51"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.168292    3642 scope.go:117] "RemoveContainer" containerID="862df9ac2aa291a1fd67edc74d04423c203ac8b935809be931d7af85bab22892"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: E0805 12:45:14.330819    3642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-335738?timeout=10s\": dial tcp 192.168.39.97:8443: connect: connection refused" interval="800ms"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: I0805 12:45:14.434134    3642 kubelet_node_status.go:73] "Attempting to register node" node="pause-335738"
	Aug 05 12:45:14 pause-335738 kubelet[3642]: E0805 12:45:14.435612    3642 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-335738"
	Aug 05 12:45:15 pause-335738 kubelet[3642]: I0805 12:45:15.237250    3642 kubelet_node_status.go:73] "Attempting to register node" node="pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.565284    3642 kubelet_node_status.go:112] "Node was previously registered" node="pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.565729    3642 kubelet_node_status.go:76] "Successfully registered node" node="pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.566908    3642 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.567888    3642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: E0805 12:45:17.619980    3642 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-335738\" already exists" pod="kube-system/kube-controller-manager-pause-335738"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.717470    3642 apiserver.go:52] "Watching apiserver"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.721229    3642 topology_manager.go:215] "Topology Admit Handler" podUID="d7245122-31af-45f8-99bf-1c863cbe5fe0" podNamespace="kube-system" podName="kube-proxy-l65cv"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.722360    3642 topology_manager.go:215] "Topology Admit Handler" podUID="f9763738-61b9-43a5-9368-e06b52f43cd1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lzsxg"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.724038    3642 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.808859    3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7245122-31af-45f8-99bf-1c863cbe5fe0-xtables-lock\") pod \"kube-proxy-l65cv\" (UID: \"d7245122-31af-45f8-99bf-1c863cbe5fe0\") " pod="kube-system/kube-proxy-l65cv"
	Aug 05 12:45:17 pause-335738 kubelet[3642]: I0805 12:45:17.809027    3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7245122-31af-45f8-99bf-1c863cbe5fe0-lib-modules\") pod \"kube-proxy-l65cv\" (UID: \"d7245122-31af-45f8-99bf-1c863cbe5fe0\") " pod="kube-system/kube-proxy-l65cv"
	Aug 05 12:45:18 pause-335738 kubelet[3642]: I0805 12:45:18.023559    3642 scope.go:117] "RemoveContainer" containerID="add61196ce4f029ecc2eb9dbd7dbded2932824edc54b7099b1ee0c73a8ac269d"
	Aug 05 12:45:18 pause-335738 kubelet[3642]: I0805 12:45:18.024734    3642 scope.go:117] "RemoveContainer" containerID="2d03d5b65be38adf050eac82d091e13a70488941b180fbfe98c242246dea6d02"
	Aug 05 12:45:26 pause-335738 kubelet[3642]: I0805 12:45:26.101324    3642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:45:35.089950  435798 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19377-383955/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
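
The "bufio.Scanner: token too long" error in the stderr block above comes from logs.go reading lastStart.txt with a default-sized scanner: Go's bufio.Scanner refuses tokens larger than 64 KiB unless it is given a bigger buffer. A minimal sketch of reading such a file without hitting that limit (this is not minikube's actual logs.go, and the file path is only illustrative):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path, standing in for .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); raise it so very
		// long single lines do not fail with "token too long".
		sc.Buffer(make([]byte, 64*1024), 10*1024*1024)

		for sc.Scan() {
			_ = sc.Text() // process each log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "reading log:", err)
		}
	}
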
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-335738 -n pause-335738
helpers_test.go:261: (dbg) Run:  kubectl --context pause-335738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (300.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-635707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-635707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m0.39116723s)

                                                
                                                
-- stdout --
	* [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:48:21.700131  443495 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:48:21.700452  443495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:48:21.700465  443495 out.go:304] Setting ErrFile to fd 2...
	I0805 12:48:21.700471  443495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:48:21.700778  443495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:48:21.701583  443495 out.go:298] Setting JSON to false
	I0805 12:48:21.703146  443495 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9049,"bootTime":1722853053,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:48:21.703229  443495 start.go:139] virtualization: kvm guest
	I0805 12:48:21.705564  443495 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:48:21.707048  443495 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:48:21.707107  443495 notify.go:220] Checking for updates...
	I0805 12:48:21.709814  443495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:48:21.711170  443495 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:48:21.712397  443495 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:48:21.713707  443495 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:48:21.714903  443495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:48:21.716627  443495 config.go:182] Loaded profile config "bridge-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:48:21.716801  443495 config.go:182] Loaded profile config "enable-default-cni-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:48:21.716937  443495 config.go:182] Loaded profile config "flannel-119870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:48:21.717064  443495 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:48:21.758158  443495 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 12:48:21.759381  443495 start.go:297] selected driver: kvm2
	I0805 12:48:21.759397  443495 start.go:901] validating driver "kvm2" against <nil>
	I0805 12:48:21.759414  443495 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:48:21.760452  443495 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:48:21.760556  443495 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:48:21.778909  443495 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:48:21.778988  443495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 12:48:21.779310  443495 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:48:21.779349  443495 cni.go:84] Creating CNI manager for ""
	I0805 12:48:21.779360  443495 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:48:21.779373  443495 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 12:48:21.779465  443495 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:48:21.779612  443495 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:48:21.781673  443495 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:48:21.783037  443495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:48:21.783087  443495 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:48:21.783097  443495 cache.go:56] Caching tarball of preloaded images
	I0805 12:48:21.783223  443495 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:48:21.783239  443495 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:48:21.783366  443495 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:48:21.783393  443495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json: {Name:mk18f1e956ba5ca62d9c39c96470550d6f89a2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:48:21.783560  443495 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:48:49.377124  443495 start.go:364] duration metric: took 27.593508877s to acquireMachinesLock for "old-k8s-version-635707"
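
The two lines above show the machines lock being requested at 12:48:21 and only acquired 27.6s later, once another profile released it; the lock config in the log carries Delay:500ms and Timeout:13m0s. A rough sketch of that retry-until-timeout pattern, using an exclusive-create lock file purely for illustration (this is not minikube's lock package):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, sleeping `delay` between
	// attempts and giving up after `timeout`, mirroring the Delay/Timeout
	// fields seen in the log. Illustrative only.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil // release function
			}
			if !os.IsExist(err) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for lock %s", path)
			}
			time.Sleep(delay) // another process holds the lock; retry
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		fmt.Println("lock acquired; machine provisioning can proceed")
	}
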
	I0805 12:48:49.377215  443495 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:48:49.377342  443495 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 12:48:49.379197  443495 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 12:48:49.379396  443495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:48:49.379456  443495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:48:49.396921  443495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42047
	I0805 12:48:49.397441  443495 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:48:49.398096  443495 main.go:141] libmachine: Using API Version  1
	I0805 12:48:49.398118  443495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:48:49.398483  443495 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:48:49.398666  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:48:49.398805  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:48:49.398973  443495 start.go:159] libmachine.API.Create for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:48:49.399000  443495 client.go:168] LocalClient.Create starting
	I0805 12:48:49.399044  443495 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 12:48:49.399080  443495 main.go:141] libmachine: Decoding PEM data...
	I0805 12:48:49.399098  443495 main.go:141] libmachine: Parsing certificate...
	I0805 12:48:49.399159  443495 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 12:48:49.399178  443495 main.go:141] libmachine: Decoding PEM data...
	I0805 12:48:49.399189  443495 main.go:141] libmachine: Parsing certificate...
	I0805 12:48:49.399210  443495 main.go:141] libmachine: Running pre-create checks...
	I0805 12:48:49.399219  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .PreCreateCheck
	I0805 12:48:49.399624  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:48:49.400067  443495 main.go:141] libmachine: Creating machine...
	I0805 12:48:49.400085  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .Create
	I0805 12:48:49.400224  443495 main.go:141] libmachine: (old-k8s-version-635707) Creating KVM machine...
	I0805 12:48:49.401667  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found existing default KVM network
	I0805 12:48:49.403513  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:49.403333  443833 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bc:32:e2} reservation:<nil>}
	I0805 12:48:49.405030  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:49.404938  443833 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:79:d4} reservation:<nil>}
	I0805 12:48:49.406736  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:49.406647  443833 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ff0}
	I0805 12:48:49.406791  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | created network xml: 
	I0805 12:48:49.406804  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | <network>
	I0805 12:48:49.406814  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |   <name>mk-old-k8s-version-635707</name>
	I0805 12:48:49.406821  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |   <dns enable='no'/>
	I0805 12:48:49.406828  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |   
	I0805 12:48:49.406863  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0805 12:48:49.406875  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |     <dhcp>
	I0805 12:48:49.406891  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0805 12:48:49.406899  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |     </dhcp>
	I0805 12:48:49.406906  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |   </ip>
	I0805 12:48:49.406914  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG |   
	I0805 12:48:49.406921  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | </network>
	I0805 12:48:49.406931  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | 
	I0805 12:48:49.412575  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | trying to create private KVM network mk-old-k8s-version-635707 192.168.61.0/24...
	I0805 12:48:49.505953  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | private KVM network mk-old-k8s-version-635707 192.168.61.0/24 created
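
The subnet-selection lines above show the driver skipping 192.168.39.0/24 and 192.168.50.0/24 because host bridges (virbr1, virbr2) already occupy them, then creating the libvirt network on the free 192.168.61.0/24. Roughly, that check amounts to testing candidate /24s against the addresses of local interfaces; the sketch below illustrates the idea and is not minikube's actual network.go:

	package main

	import (
		"fmt"
		"net"
	)

	// subnetTaken reports whether any local interface already has an address
	// inside the candidate CIDR, i.e. the subnet is "taken" in the sense of
	// the log above.
	func subnetTaken(cidr string) bool {
		_, candidate, err := net.ParseCIDR(cidr)
		if err != nil {
			return true // treat unparsable input as unusable
		}
		ifaces, _ := net.Interfaces()
		for _, ifc := range ifaces {
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
					return true
				}
			}
		}
		return false
	}

	func main() {
		for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
			if subnetTaken(cidr) {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			break
		}
	}
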
	I0805 12:48:49.505987  443495 main.go:141] libmachine: (old-k8s-version-635707) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707 ...
	I0805 12:48:49.506001  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:49.505922  443833 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:48:49.506017  443495 main.go:141] libmachine: (old-k8s-version-635707) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 12:48:49.506273  443495 main.go:141] libmachine: (old-k8s-version-635707) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 12:48:50.036015  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:50.035863  443833 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa...
	I0805 12:48:50.132025  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:50.131842  443833 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/old-k8s-version-635707.rawdisk...
	I0805 12:48:50.132063  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Writing magic tar header
	I0805 12:48:50.132081  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Writing SSH key tar header
	I0805 12:48:50.132095  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:50.132056  443833 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707 ...
	I0805 12:48:50.132279  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707
	I0805 12:48:50.132321  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 12:48:50.132360  443495 main.go:141] libmachine: (old-k8s-version-635707) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707 (perms=drwx------)
	I0805 12:48:50.132383  443495 main.go:141] libmachine: (old-k8s-version-635707) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 12:48:50.132397  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:48:50.132419  443495 main.go:141] libmachine: (old-k8s-version-635707) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 12:48:50.132431  443495 main.go:141] libmachine: (old-k8s-version-635707) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 12:48:50.132441  443495 main.go:141] libmachine: (old-k8s-version-635707) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 12:48:50.132452  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 12:48:50.132477  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 12:48:50.132486  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Checking permissions on dir: /home/jenkins
	I0805 12:48:50.132498  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Checking permissions on dir: /home
	I0805 12:48:50.132516  443495 main.go:141] libmachine: (old-k8s-version-635707) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 12:48:50.132526  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Skipping /home - not owner
	I0805 12:48:50.132537  443495 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:48:50.133822  443495 main.go:141] libmachine: (old-k8s-version-635707) define libvirt domain using xml: 
	I0805 12:48:50.133861  443495 main.go:141] libmachine: (old-k8s-version-635707) <domain type='kvm'>
	I0805 12:48:50.133898  443495 main.go:141] libmachine: (old-k8s-version-635707)   <name>old-k8s-version-635707</name>
	I0805 12:48:50.133926  443495 main.go:141] libmachine: (old-k8s-version-635707)   <memory unit='MiB'>2200</memory>
	I0805 12:48:50.133940  443495 main.go:141] libmachine: (old-k8s-version-635707)   <vcpu>2</vcpu>
	I0805 12:48:50.133955  443495 main.go:141] libmachine: (old-k8s-version-635707)   <features>
	I0805 12:48:50.133968  443495 main.go:141] libmachine: (old-k8s-version-635707)     <acpi/>
	I0805 12:48:50.133981  443495 main.go:141] libmachine: (old-k8s-version-635707)     <apic/>
	I0805 12:48:50.133994  443495 main.go:141] libmachine: (old-k8s-version-635707)     <pae/>
	I0805 12:48:50.134005  443495 main.go:141] libmachine: (old-k8s-version-635707)     
	I0805 12:48:50.134015  443495 main.go:141] libmachine: (old-k8s-version-635707)   </features>
	I0805 12:48:50.134027  443495 main.go:141] libmachine: (old-k8s-version-635707)   <cpu mode='host-passthrough'>
	I0805 12:48:50.134060  443495 main.go:141] libmachine: (old-k8s-version-635707)   
	I0805 12:48:50.134086  443495 main.go:141] libmachine: (old-k8s-version-635707)   </cpu>
	I0805 12:48:50.134095  443495 main.go:141] libmachine: (old-k8s-version-635707)   <os>
	I0805 12:48:50.134106  443495 main.go:141] libmachine: (old-k8s-version-635707)     <type>hvm</type>
	I0805 12:48:50.134116  443495 main.go:141] libmachine: (old-k8s-version-635707)     <boot dev='cdrom'/>
	I0805 12:48:50.134127  443495 main.go:141] libmachine: (old-k8s-version-635707)     <boot dev='hd'/>
	I0805 12:48:50.134139  443495 main.go:141] libmachine: (old-k8s-version-635707)     <bootmenu enable='no'/>
	I0805 12:48:50.134149  443495 main.go:141] libmachine: (old-k8s-version-635707)   </os>
	I0805 12:48:50.134157  443495 main.go:141] libmachine: (old-k8s-version-635707)   <devices>
	I0805 12:48:50.134179  443495 main.go:141] libmachine: (old-k8s-version-635707)     <disk type='file' device='cdrom'>
	I0805 12:48:50.134194  443495 main.go:141] libmachine: (old-k8s-version-635707)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/boot2docker.iso'/>
	I0805 12:48:50.134208  443495 main.go:141] libmachine: (old-k8s-version-635707)       <target dev='hdc' bus='scsi'/>
	I0805 12:48:50.134218  443495 main.go:141] libmachine: (old-k8s-version-635707)       <readonly/>
	I0805 12:48:50.134235  443495 main.go:141] libmachine: (old-k8s-version-635707)     </disk>
	I0805 12:48:50.134264  443495 main.go:141] libmachine: (old-k8s-version-635707)     <disk type='file' device='disk'>
	I0805 12:48:50.134277  443495 main.go:141] libmachine: (old-k8s-version-635707)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 12:48:50.134293  443495 main.go:141] libmachine: (old-k8s-version-635707)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/old-k8s-version-635707.rawdisk'/>
	I0805 12:48:50.134303  443495 main.go:141] libmachine: (old-k8s-version-635707)       <target dev='hda' bus='virtio'/>
	I0805 12:48:50.134319  443495 main.go:141] libmachine: (old-k8s-version-635707)     </disk>
	I0805 12:48:50.134333  443495 main.go:141] libmachine: (old-k8s-version-635707)     <interface type='network'>
	I0805 12:48:50.134346  443495 main.go:141] libmachine: (old-k8s-version-635707)       <source network='mk-old-k8s-version-635707'/>
	I0805 12:48:50.134359  443495 main.go:141] libmachine: (old-k8s-version-635707)       <model type='virtio'/>
	I0805 12:48:50.134369  443495 main.go:141] libmachine: (old-k8s-version-635707)     </interface>
	I0805 12:48:50.134381  443495 main.go:141] libmachine: (old-k8s-version-635707)     <interface type='network'>
	I0805 12:48:50.134395  443495 main.go:141] libmachine: (old-k8s-version-635707)       <source network='default'/>
	I0805 12:48:50.134416  443495 main.go:141] libmachine: (old-k8s-version-635707)       <model type='virtio'/>
	I0805 12:48:50.134437  443495 main.go:141] libmachine: (old-k8s-version-635707)     </interface>
	I0805 12:48:50.134448  443495 main.go:141] libmachine: (old-k8s-version-635707)     <serial type='pty'>
	I0805 12:48:50.134459  443495 main.go:141] libmachine: (old-k8s-version-635707)       <target port='0'/>
	I0805 12:48:50.134467  443495 main.go:141] libmachine: (old-k8s-version-635707)     </serial>
	I0805 12:48:50.134481  443495 main.go:141] libmachine: (old-k8s-version-635707)     <console type='pty'>
	I0805 12:48:50.134493  443495 main.go:141] libmachine: (old-k8s-version-635707)       <target type='serial' port='0'/>
	I0805 12:48:50.134509  443495 main.go:141] libmachine: (old-k8s-version-635707)     </console>
	I0805 12:48:50.134554  443495 main.go:141] libmachine: (old-k8s-version-635707)     <rng model='virtio'>
	I0805 12:48:50.134577  443495 main.go:141] libmachine: (old-k8s-version-635707)       <backend model='random'>/dev/random</backend>
	I0805 12:48:50.134589  443495 main.go:141] libmachine: (old-k8s-version-635707)     </rng>
	I0805 12:48:50.134603  443495 main.go:141] libmachine: (old-k8s-version-635707)     
	I0805 12:48:50.134615  443495 main.go:141] libmachine: (old-k8s-version-635707)     
	I0805 12:48:50.134622  443495 main.go:141] libmachine: (old-k8s-version-635707)   </devices>
	I0805 12:48:50.134632  443495 main.go:141] libmachine: (old-k8s-version-635707) </domain>
	I0805 12:48:50.134642  443495 main.go:141] libmachine: (old-k8s-version-635707) 
	I0805 12:48:50.205858  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:f9:a3:3d in network default
	I0805 12:48:50.206643  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:50.206676  443495 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:48:50.207585  443495 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:48:50.207972  443495 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:48:50.208563  443495 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:48:50.209484  443495 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:48:51.706336  443495 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:48:51.707234  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:51.707705  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:51.707736  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:51.707685  443833 retry.go:31] will retry after 309.768852ms: waiting for machine to come up
	I0805 12:48:52.019604  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:52.020220  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:52.020245  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:52.020165  443833 retry.go:31] will retry after 320.197001ms: waiting for machine to come up
	I0805 12:48:52.341689  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:52.342374  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:52.342407  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:52.342282  443833 retry.go:31] will retry after 352.7217ms: waiting for machine to come up
	I0805 12:48:52.697181  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:52.697866  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:52.697895  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:52.697762  443833 retry.go:31] will retry after 549.745341ms: waiting for machine to come up
	I0805 12:48:53.249633  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:53.250322  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:53.250360  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:53.250269  443833 retry.go:31] will retry after 760.695943ms: waiting for machine to come up
	I0805 12:48:54.012402  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:54.013044  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:54.013072  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:54.012985  443833 retry.go:31] will retry after 732.164407ms: waiting for machine to come up
	I0805 12:48:54.746760  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:54.747199  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:54.747229  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:54.747146  443833 retry.go:31] will retry after 963.132311ms: waiting for machine to come up
	I0805 12:48:55.712482  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:55.713100  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:55.713146  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:55.713073  443833 retry.go:31] will retry after 929.715522ms: waiting for machine to come up
	I0805 12:48:56.644419  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:56.645014  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:56.645043  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:56.644959  443833 retry.go:31] will retry after 1.34549505s: waiting for machine to come up
	I0805 12:48:57.992662  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:48:57.993231  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:48:57.993277  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:48:57.993196  443833 retry.go:31] will retry after 2.112920902s: waiting for machine to come up
	I0805 12:49:00.107755  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:00.108476  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:49:00.108509  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:49:00.108401  443833 retry.go:31] will retry after 2.083449582s: waiting for machine to come up
	I0805 12:49:02.193968  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:02.194579  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:49:02.194622  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:49:02.194516  443833 retry.go:31] will retry after 3.021208356s: waiting for machine to come up
	I0805 12:49:05.217142  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:05.217847  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:49:05.217870  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:49:05.217785  443833 retry.go:31] will retry after 4.472292948s: waiting for machine to come up
	I0805 12:49:09.691558  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:09.692116  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:49:09.692143  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:49:09.692065  443833 retry.go:31] will retry after 4.225227471s: waiting for machine to come up
	I0805 12:49:13.921061  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:13.921703  443495 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:49:13.921734  443495 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:49:13.921749  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:13.922164  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707
	I0805 12:49:14.001866  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:49:14.001903  443495 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:49:14.001913  443495 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:49:14.005203  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.005675  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.005715  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.005827  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:49:14.005871  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:49:14.005908  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:49:14.005925  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:49:14.005936  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:49:14.132334  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
	I0805 12:49:14.132632  443495 main.go:141] libmachine: (old-k8s-version-635707) KVM machine creation complete!
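The "will retry after ...: waiting for machine to come up" lines above come from a retry loop that sleeps a randomized, slowly growing interval between attempts until the VM reports an IP. A minimal sketch of that pattern is below; the helper name, base delay, and growth schedule are assumptions for illustration, not minikube's retry package.

// Illustrative retry-with-backoff helper, loosely modelled on the
// "will retry after ...: waiting for machine to come up" lines above.
// Names and durations are assumptions, not minikube's retry API.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff calls fn until it succeeds or maxWait elapses, sleeping a
// jittered, growing interval between attempts and logging each delay.
func retryBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	base := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// Jittered delay that roughly doubles every few attempts.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		if attempt%3 == 0 {
			base *= 2
		}
	}
}

func main() {
	tries := 0
	err := retryBackoff(func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}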
	I0805 12:49:14.133071  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:49:14.133770  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:49:14.133997  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:49:14.134233  443495 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 12:49:14.134254  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:49:14.136249  443495 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 12:49:14.136270  443495 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 12:49:14.136277  443495 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 12:49:14.136287  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:14.139085  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.139611  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.139632  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.139780  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:14.139987  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.140183  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.140369  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:14.140580  443495 main.go:141] libmachine: Using SSH client type: native
	I0805 12:49:14.140858  443495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:49:14.140877  443495 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 12:49:14.252590  443495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
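The WaitForSSH step above amounts to running `exit 0` over SSH with the machine's id_rsa key until the command returns cleanly. A hedged sketch of that check using golang.org/x/crypto/ssh is shown below; the address, user, and key path are placeholders, and this is not libmachine's implementation.

// Sketch of the "run `exit 0` over SSH" availability check done during
// WaitForSSH, using golang.org/x/crypto/ssh. Host, user, and key path are
// placeholders; not libmachine's implementation.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func sshExitZero(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // nil error means the command exited 0
}

func main() {
	if err := sshExitZero("192.168.61.41:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}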
	I0805 12:49:14.252623  443495 main.go:141] libmachine: Detecting the provisioner...
	I0805 12:49:14.252635  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:14.256027  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.256444  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.256476  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.256715  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:14.256951  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.257255  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.257426  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:14.257586  443495 main.go:141] libmachine: Using SSH client type: native
	I0805 12:49:14.257783  443495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:49:14.257797  443495 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 12:49:14.369328  443495 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 12:49:14.369407  443495 main.go:141] libmachine: found compatible host: buildroot
	I0805 12:49:14.369423  443495 main.go:141] libmachine: Provisioning with buildroot...
	I0805 12:49:14.369437  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:49:14.369736  443495 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:49:14.369770  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:49:14.370008  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:14.373592  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.374044  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.374088  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.374165  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:14.374361  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.374528  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.374689  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:14.374848  443495 main.go:141] libmachine: Using SSH client type: native
	I0805 12:49:14.375062  443495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:49:14.375079  443495 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:49:14.495616  443495 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:49:14.495656  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:14.499403  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.499923  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.499971  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.500203  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:14.500436  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.500661  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.500809  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:14.501000  443495 main.go:141] libmachine: Using SSH client type: native
	I0805 12:49:14.501215  443495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:49:14.501232  443495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:49:14.614463  443495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:49:14.614500  443495 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:49:14.614558  443495 buildroot.go:174] setting up certificates
	I0805 12:49:14.614576  443495 provision.go:84] configureAuth start
	I0805 12:49:14.614593  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:49:14.614925  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:49:14.618024  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.618472  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.618530  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.618639  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:14.621258  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.621639  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.621705  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.621789  443495 provision.go:143] copyHostCerts
	I0805 12:49:14.621873  443495 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:49:14.621887  443495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:49:14.621965  443495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:49:14.622116  443495 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:49:14.622130  443495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:49:14.622164  443495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:49:14.622263  443495 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:49:14.622274  443495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:49:14.622301  443495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:49:14.622397  443495 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
	I0805 12:49:14.683252  443495 provision.go:177] copyRemoteCerts
	I0805 12:49:14.683318  443495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:49:14.683347  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:14.686283  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.686633  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.686657  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.686898  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:14.687107  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.687282  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:14.687431  443495 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:49:14.779173  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:49:14.808571  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:49:14.834519  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:49:14.862341  443495 provision.go:87] duration metric: took 247.746148ms to configureAuth
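configureAuth above generates a server certificate whose SAN list covers 127.0.0.1, the VM's IP, localhost, minikube, and the machine name. The sketch below shows a simplified version of that step with Go's crypto/x509; for brevity it is self-signed, whereas the real provisioner signs the server cert with the ca.pem / ca-key.pem pair, so treat it as an illustration of the SAN handling only.

// Simplified sketch of generating a server certificate whose SANs match the
// list logged above (127.0.0.1, 192.168.61.41, localhost, minikube, the
// machine name). Self-signed for brevity; the real provisioner signs with
// the CA key instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-635707"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-635707"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.41")},
	}

	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}

	certOut, _ := os.Create("server.pem")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()

	keyOut, _ := os.Create("server-key.pem")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()

	log.Println("wrote server.pem and server-key.pem")
}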
	I0805 12:49:14.862383  443495 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:49:14.862621  443495 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:49:14.862722  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:14.865575  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.866078  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:14.866111  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:14.866274  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:14.866499  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.866698  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:14.866872  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:14.867117  443495 main.go:141] libmachine: Using SSH client type: native
	I0805 12:49:14.867324  443495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:49:14.867350  443495 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:49:15.190752  443495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:49:15.190784  443495 main.go:141] libmachine: Checking connection to Docker...
	I0805 12:49:15.190797  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetURL
	I0805 12:49:15.192195  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using libvirt version 6000000
	I0805 12:49:15.194426  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.194770  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:15.194810  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.195032  443495 main.go:141] libmachine: Docker is up and running!
	I0805 12:49:15.195050  443495 main.go:141] libmachine: Reticulating splines...
	I0805 12:49:15.195058  443495 client.go:171] duration metric: took 25.796050227s to LocalClient.Create
	I0805 12:49:15.195086  443495 start.go:167] duration metric: took 25.796113828s to libmachine.API.Create "old-k8s-version-635707"
	I0805 12:49:15.195114  443495 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:49:15.195130  443495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:49:15.195154  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:49:15.195402  443495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:49:15.195428  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:15.197837  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.198233  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:15.198265  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.198419  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:15.198620  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:15.198770  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:15.198897  443495 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:49:15.285119  443495 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:49:15.289896  443495 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:49:15.289927  443495 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:49:15.289992  443495 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:49:15.290086  443495 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:49:15.290212  443495 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:49:15.301323  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:49:15.330646  443495 start.go:296] duration metric: took 135.512925ms for postStartSetup
	I0805 12:49:15.330705  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:49:15.331439  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:49:15.334402  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.334800  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:15.334845  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.335091  443495 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:49:15.335333  443495 start.go:128] duration metric: took 25.957976771s to createHost
	I0805 12:49:15.335363  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:15.338031  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.338421  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:15.338468  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.338616  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:15.338805  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:15.338973  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:15.339118  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:15.339302  443495 main.go:141] libmachine: Using SSH client type: native
	I0805 12:49:15.339518  443495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:49:15.339531  443495 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:49:15.445199  443495 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862155.388531472
	
	I0805 12:49:15.445228  443495 fix.go:216] guest clock: 1722862155.388531472
	I0805 12:49:15.445235  443495 fix.go:229] Guest: 2024-08-05 12:49:15.388531472 +0000 UTC Remote: 2024-08-05 12:49:15.335348776 +0000 UTC m=+53.681135104 (delta=53.182696ms)
	I0805 12:49:15.445256  443495 fix.go:200] guest clock delta is within tolerance: 53.182696ms
	I0805 12:49:15.445262  443495 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 26.068105427s
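The guest-clock check above parses the VM's `date +%s.%N` output, compares it with the host clock, and accepts the skew when it stays under a tolerance. A minimal sketch of that comparison follows; the 2s tolerance is an assumption for illustration, not minikube's constant, and float parsing of the epoch loses sub-microsecond precision, which is irrelevant at millisecond skews.

// Minimal sketch of the guest-clock check logged above: parse the guest's
// `date +%s.%N` output, compare it to the local clock, and accept the skew
// if it is under a tolerance. The 2s tolerance is an assumption.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return hostNow.Sub(guest), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold, not minikube's constant

	// Values taken from the log lines above.
	delta, err := clockDelta("1722862155.388531472", time.Unix(0, 1722862155335348776))
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}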
	I0805 12:49:15.445289  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:49:15.445612  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:49:15.448895  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.449297  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:15.449335  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.449634  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:49:15.450215  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:49:15.450448  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:49:15.450561  443495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:49:15.450618  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:15.450646  443495 ssh_runner.go:195] Run: cat /version.json
	I0805 12:49:15.450675  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:49:15.453956  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.454215  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.454623  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:15.454668  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.454900  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:15.454963  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:15.455006  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:15.455185  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:49:15.455278  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:15.455393  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:49:15.455445  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:15.455575  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:49:15.455618  443495 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:49:15.455758  443495 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:49:15.542082  443495 ssh_runner.go:195] Run: systemctl --version
	I0805 12:49:15.569475  443495 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:49:15.753470  443495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:49:15.760409  443495 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:49:15.760498  443495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:49:15.782691  443495 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:49:15.782722  443495 start.go:495] detecting cgroup driver to use...
	I0805 12:49:15.782804  443495 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:49:15.804272  443495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:49:15.821752  443495 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:49:15.821829  443495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:49:15.838339  443495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:49:15.854221  443495 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:49:15.998351  443495 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:49:16.176774  443495 docker.go:233] disabling docker service ...
	I0805 12:49:16.176854  443495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:49:16.192135  443495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:49:16.216154  443495 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:49:16.387805  443495 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:49:16.535489  443495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:49:16.550750  443495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:49:16.571226  443495 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:49:16.571290  443495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:49:16.584982  443495 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:49:16.585066  443495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:49:16.596806  443495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:49:16.608698  443495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:49:16.624578  443495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:49:16.638575  443495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:49:16.652059  443495 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:49:16.652129  443495 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:49:16.666551  443495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:49:16.677716  443495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:49:16.819660  443495 ssh_runner.go:195] Run: sudo systemctl restart crio
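The sed invocations above rewrite single keys (pause_image, cgroup_manager, conmon_cgroup) in the CRI-O drop-in before restarting the service. Below is a hedged, local Go equivalent of that "replace the whole line that sets a key" edit; the helper is a sketch, while the file path and key/value pairs are the ones shown in the log.

// Local Go equivalent of the sed edits above: replace the whole line that
// sets a key in a conf file. The helper is illustrative; the path and
// key/value pairs come from the log.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites every line of the form `key = ...` to `key = "value"`,
// mirroring: sed -i 's|^.*key = .*$|key = "value"|' <file>
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		panic(err)
	}
	if err := setConfKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
	fmt.Println("updated", conf)
}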
	I0805 12:49:16.979229  443495 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:49:16.979343  443495 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:49:16.985615  443495 start.go:563] Will wait 60s for crictl version
	I0805 12:49:16.985694  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:16.990610  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:49:17.046653  443495 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:49:17.046750  443495 ssh_runner.go:195] Run: crio --version
	I0805 12:49:17.084217  443495 ssh_runner.go:195] Run: crio --version
	I0805 12:49:17.118165  443495 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:49:17.119673  443495 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:49:17.122993  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:17.123505  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:49:06 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:49:17.123584  443495 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:49:17.123880  443495 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:49:17.129203  443495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:49:17.146632  443495 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:49:17.146768  443495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:49:17.146832  443495 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:49:17.188061  443495 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:49:17.188144  443495 ssh_runner.go:195] Run: which lz4
	I0805 12:49:17.192320  443495 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:49:17.196768  443495 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:49:17.196797  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:49:18.914207  443495 crio.go:462] duration metric: took 1.721914912s to copy over tarball
	I0805 12:49:18.914297  443495 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:49:21.635846  443495 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.721514429s)
	I0805 12:49:21.635895  443495 crio.go:469] duration metric: took 2.721656941s to extract the tarball
	I0805 12:49:21.635906  443495 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:49:21.692883  443495 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:49:21.778146  443495 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:49:21.778182  443495 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:49:21.778305  443495 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:49:21.778354  443495 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:49:21.778457  443495 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:49:21.778315  443495 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:49:21.778666  443495 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:49:21.778315  443495 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:49:21.778521  443495 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:49:21.779456  443495 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:49:21.781832  443495 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:49:21.781894  443495 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:49:21.782160  443495 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:49:21.782381  443495 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:49:21.782425  443495 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:49:21.782552  443495 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:49:21.782634  443495 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:49:21.783284  443495 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:49:21.918393  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:49:21.927262  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:49:21.927317  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:49:21.927478  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:49:21.951443  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:49:21.954109  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:49:21.961975  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:49:21.978965  443495 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:49:21.979018  443495 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:49:21.979075  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:22.097233  443495 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:49:22.097286  443495 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:49:22.097343  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:22.111987  443495 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:49:22.112035  443495 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:49:22.112071  443495 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:49:22.112039  443495 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:49:22.112136  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:22.112136  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:22.112067  443495 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:49:22.112184  443495 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:49:22.112207  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:22.123083  443495 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:49:22.123100  443495 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:49:22.123135  443495 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:49:22.123135  443495 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:49:22.123168  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:22.123184  443495 ssh_runner.go:195] Run: which crictl
	I0805 12:49:22.123201  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:49:22.123169  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:49:22.126673  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:49:22.126726  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:49:22.126742  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:49:22.135250  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:49:22.135428  443495 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:49:22.288418  443495 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:49:22.288418  443495 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:49:22.288464  443495 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:49:22.300591  443495 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:49:22.300599  443495 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:49:22.300622  443495 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:49:22.300634  443495 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:49:22.660736  443495 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:49:22.803104  443495 cache_images.go:92] duration metric: took 1.024889827s to LoadCachedImages
	W0805 12:49:22.803196  443495 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0805 12:49:22.803212  443495 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:49:22.803434  443495 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:49:22.803540  443495 ssh_runner.go:195] Run: crio config
	I0805 12:49:22.862889  443495 cni.go:84] Creating CNI manager for ""
	I0805 12:49:22.862910  443495 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:49:22.862931  443495 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:49:22.862953  443495 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:49:22.863114  443495 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:49:22.863197  443495 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:49:22.873685  443495 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:49:22.873772  443495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:49:22.884187  443495 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:49:22.901705  443495 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:49:22.919606  443495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0805 12:49:22.936664  443495 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:49:22.940870  443495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:49:22.954239  443495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:49:23.084915  443495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:49:23.103117  443495 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:49:23.103142  443495 certs.go:194] generating shared ca certs ...
	I0805 12:49:23.103162  443495 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:49:23.103347  443495 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:49:23.103487  443495 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:49:23.103511  443495 certs.go:256] generating profile certs ...
	I0805 12:49:23.103596  443495 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:49:23.103618  443495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.crt with IP's: []
	I0805 12:49:23.259556  443495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.crt ...
	I0805 12:49:23.259591  443495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.crt: {Name:mkd2cc07886911cfe0c2a3c7cae99af56f451879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:49:23.290177  443495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key ...
	I0805 12:49:23.290230  443495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key: {Name:mk638bc3f2f7f7edd1ec5b5c7e39bc9f72739a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:49:23.290417  443495 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:49:23.290451  443495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt.3f42c485 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.41]
	I0805 12:49:23.666954  443495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt.3f42c485 ...
	I0805 12:49:23.666989  443495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt.3f42c485: {Name:mk494bedf22acce02cf8b48176df345e86c812b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:49:23.667210  443495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485 ...
	I0805 12:49:23.667230  443495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485: {Name:mk4fea6df141df18cdc2118c267c013c25946e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:49:23.667334  443495 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt.3f42c485 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt
	I0805 12:49:23.667466  443495 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485 -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key
	I0805 12:49:23.667560  443495 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:49:23.667585  443495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt with IP's: []
	I0805 12:49:23.844781  443495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt ...
	I0805 12:49:23.844813  443495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt: {Name:mke3573f0b2ace1bfa7259ac246466033f717156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:49:23.845037  443495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key ...
	I0805 12:49:23.845055  443495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key: {Name:mk59b6c8f2ae31e7da99de5261c6ab230ee0b1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:49:23.845298  443495 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:49:23.845345  443495 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:49:23.845361  443495 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:49:23.845400  443495 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:49:23.845431  443495 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:49:23.845466  443495 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:49:23.845520  443495 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:49:23.846568  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:49:23.883176  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:49:23.919923  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:49:23.959988  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:49:24.001964  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:49:24.070742  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:49:24.104480  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:49:24.151467  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:49:24.199315  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:49:24.235482  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:49:24.272951  443495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:49:24.305993  443495 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:49:24.327057  443495 ssh_runner.go:195] Run: openssl version
	I0805 12:49:24.336352  443495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:49:24.355759  443495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:49:24.362757  443495 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:49:24.362833  443495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:49:24.371407  443495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:49:24.390280  443495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:49:24.403831  443495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:49:24.409152  443495 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:49:24.409227  443495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:49:24.416464  443495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:49:24.432553  443495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:49:24.445420  443495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:49:24.450462  443495 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:49:24.450532  443495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:49:24.456817  443495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:49:24.472900  443495 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:49:24.478737  443495 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 12:49:24.478803  443495 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:49:24.478884  443495 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:49:24.478948  443495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:49:24.530012  443495 cri.go:89] found id: ""
	I0805 12:49:24.530078  443495 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:49:24.541098  443495 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:49:24.552243  443495 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:49:24.565729  443495 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:49:24.565753  443495 kubeadm.go:157] found existing configuration files:
	
	I0805 12:49:24.565799  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:49:24.576922  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:49:24.577004  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:49:24.589785  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:49:24.601668  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:49:24.601749  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:49:24.615884  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:49:24.626266  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:49:24.626337  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:49:24.637036  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:49:24.647108  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:49:24.647177  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:49:24.657482  443495 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 12:49:24.839102  443495 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 12:49:24.839212  443495 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 12:49:25.013609  443495 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 12:49:25.013761  443495 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 12:49:25.013913  443495 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 12:49:25.244085  443495 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 12:49:25.245765  443495 out.go:204]   - Generating certificates and keys ...
	I0805 12:49:25.245869  443495 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 12:49:25.245960  443495 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 12:49:25.527923  443495 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 12:49:25.904978  443495 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 12:49:26.523210  443495 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 12:49:26.996199  443495 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 12:49:27.157988  443495 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 12:49:27.158211  443495 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-635707] and IPs [192.168.61.41 127.0.0.1 ::1]
	I0805 12:49:27.566009  443495 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 12:49:27.566213  443495 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-635707] and IPs [192.168.61.41 127.0.0.1 ::1]
	I0805 12:49:27.835335  443495 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 12:49:28.256225  443495 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 12:49:28.391069  443495 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 12:49:28.391219  443495 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 12:49:28.524686  443495 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 12:49:28.609673  443495 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 12:49:28.808695  443495 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 12:49:29.065039  443495 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 12:49:29.089523  443495 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 12:49:29.091818  443495 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 12:49:29.091937  443495 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 12:49:29.236336  443495 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 12:49:29.238284  443495 out.go:204]   - Booting up control plane ...
	I0805 12:49:29.238419  443495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 12:49:29.250371  443495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 12:49:29.251403  443495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 12:49:29.252472  443495 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 12:49:29.258920  443495 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 12:50:09.202579  443495 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 12:50:09.202702  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:50:09.203106  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:50:14.203562  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:50:14.203865  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:50:24.203093  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:50:24.203340  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:50:44.203615  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:50:44.203986  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:51:24.205496  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:51:24.206016  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:51:24.206042  443495 kubeadm.go:310] 
	I0805 12:51:24.206156  443495 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 12:51:24.206258  443495 kubeadm.go:310] 		timed out waiting for the condition
	I0805 12:51:24.206269  443495 kubeadm.go:310] 
	I0805 12:51:24.206346  443495 kubeadm.go:310] 	This error is likely caused by:
	I0805 12:51:24.206430  443495 kubeadm.go:310] 		- The kubelet is not running
	I0805 12:51:24.206698  443495 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 12:51:24.206711  443495 kubeadm.go:310] 
	I0805 12:51:24.206951  443495 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 12:51:24.207045  443495 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 12:51:24.207160  443495 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 12:51:24.207181  443495 kubeadm.go:310] 
	I0805 12:51:24.207425  443495 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 12:51:24.207585  443495 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 12:51:24.207601  443495 kubeadm.go:310] 
	I0805 12:51:24.207818  443495 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 12:51:24.208056  443495 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 12:51:24.208225  443495 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 12:51:24.208669  443495 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 12:51:24.208699  443495 kubeadm.go:310] 
	I0805 12:51:24.209195  443495 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 12:51:24.209329  443495 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 12:51:24.209483  443495 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 12:51:24.209618  443495 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-635707] and IPs [192.168.61.41 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-635707] and IPs [192.168.61.41 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-635707] and IPs [192.168.61.41 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-635707] and IPs [192.168.61.41 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0805 12:51:24.209710  443495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 12:51:25.257742  443495 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.047991794s)
	I0805 12:51:25.257838  443495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:51:25.271938  443495 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:51:25.281882  443495 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:51:25.281907  443495 kubeadm.go:157] found existing configuration files:
	
	I0805 12:51:25.281969  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:51:25.291153  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:51:25.291216  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:51:25.300364  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:51:25.309291  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:51:25.309348  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:51:25.318519  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:51:25.327319  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:51:25.327374  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:51:25.336619  443495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:51:25.345353  443495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:51:25.345403  443495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:51:25.354324  443495 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 12:51:25.437786  443495 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 12:51:25.437895  443495 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 12:51:25.581908  443495 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 12:51:25.582005  443495 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 12:51:25.582098  443495 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 12:51:25.770315  443495 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 12:51:25.773209  443495 out.go:204]   - Generating certificates and keys ...
	I0805 12:51:25.773325  443495 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 12:51:25.773407  443495 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 12:51:25.773499  443495 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 12:51:25.773559  443495 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 12:51:25.773639  443495 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 12:51:25.773686  443495 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 12:51:25.773737  443495 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 12:51:25.773788  443495 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 12:51:25.773848  443495 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 12:51:25.773935  443495 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 12:51:25.773977  443495 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 12:51:25.774025  443495 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 12:51:26.016866  443495 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 12:51:26.098925  443495 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 12:51:26.198868  443495 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 12:51:26.261939  443495 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 12:51:26.282656  443495 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 12:51:26.284901  443495 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 12:51:26.285114  443495 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 12:51:26.434310  443495 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 12:51:26.436107  443495 out.go:204]   - Booting up control plane ...
	I0805 12:51:26.436221  443495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 12:51:26.442907  443495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 12:51:26.443955  443495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 12:51:26.444811  443495 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 12:51:26.447065  443495 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 12:52:06.447405  443495 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 12:52:06.448340  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:52:06.448526  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:52:11.448690  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:52:11.448897  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:52:21.449466  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:52:21.449708  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:52:41.450504  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:52:41.450728  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:53:21.452551  443495 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 12:53:21.452802  443495 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 12:53:21.452829  443495 kubeadm.go:310] 
	I0805 12:53:21.452904  443495 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 12:53:21.452969  443495 kubeadm.go:310] 		timed out waiting for the condition
	I0805 12:53:21.452979  443495 kubeadm.go:310] 
	I0805 12:53:21.453019  443495 kubeadm.go:310] 	This error is likely caused by:
	I0805 12:53:21.453074  443495 kubeadm.go:310] 		- The kubelet is not running
	I0805 12:53:21.453221  443495 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 12:53:21.453233  443495 kubeadm.go:310] 
	I0805 12:53:21.453387  443495 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 12:53:21.453428  443495 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 12:53:21.453475  443495 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 12:53:21.453485  443495 kubeadm.go:310] 
	I0805 12:53:21.453631  443495 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 12:53:21.453763  443495 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 12:53:21.453778  443495 kubeadm.go:310] 
	I0805 12:53:21.453923  443495 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 12:53:21.454036  443495 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 12:53:21.454130  443495 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 12:53:21.454201  443495 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 12:53:21.454212  443495 kubeadm.go:310] 
	I0805 12:53:21.454581  443495 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 12:53:21.454665  443495 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 12:53:21.454833  443495 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 12:53:21.454926  443495 kubeadm.go:394] duration metric: took 3m56.976128941s to StartCluster
	I0805 12:53:21.454975  443495 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 12:53:21.455032  443495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 12:53:21.494595  443495 cri.go:89] found id: ""
	I0805 12:53:21.494635  443495 logs.go:276] 0 containers: []
	W0805 12:53:21.494647  443495 logs.go:278] No container was found matching "kube-apiserver"
	I0805 12:53:21.494655  443495 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 12:53:21.494727  443495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 12:53:21.528772  443495 cri.go:89] found id: ""
	I0805 12:53:21.528807  443495 logs.go:276] 0 containers: []
	W0805 12:53:21.528817  443495 logs.go:278] No container was found matching "etcd"
	I0805 12:53:21.528826  443495 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 12:53:21.528890  443495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 12:53:21.562837  443495 cri.go:89] found id: ""
	I0805 12:53:21.562873  443495 logs.go:276] 0 containers: []
	W0805 12:53:21.562883  443495 logs.go:278] No container was found matching "coredns"
	I0805 12:53:21.562890  443495 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 12:53:21.562965  443495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 12:53:21.598188  443495 cri.go:89] found id: ""
	I0805 12:53:21.598227  443495 logs.go:276] 0 containers: []
	W0805 12:53:21.598238  443495 logs.go:278] No container was found matching "kube-scheduler"
	I0805 12:53:21.598246  443495 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 12:53:21.598312  443495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 12:53:21.632506  443495 cri.go:89] found id: ""
	I0805 12:53:21.632535  443495 logs.go:276] 0 containers: []
	W0805 12:53:21.632544  443495 logs.go:278] No container was found matching "kube-proxy"
	I0805 12:53:21.632551  443495 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 12:53:21.632605  443495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 12:53:21.667083  443495 cri.go:89] found id: ""
	I0805 12:53:21.667122  443495 logs.go:276] 0 containers: []
	W0805 12:53:21.667134  443495 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 12:53:21.667142  443495 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 12:53:21.667209  443495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 12:53:21.702491  443495 cri.go:89] found id: ""
	I0805 12:53:21.702518  443495 logs.go:276] 0 containers: []
	W0805 12:53:21.702530  443495 logs.go:278] No container was found matching "kindnet"
	I0805 12:53:21.702544  443495 logs.go:123] Gathering logs for describe nodes ...
	I0805 12:53:21.702561  443495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 12:53:21.817391  443495 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 12:53:21.817412  443495 logs.go:123] Gathering logs for CRI-O ...
	I0805 12:53:21.817432  443495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 12:53:21.913711  443495 logs.go:123] Gathering logs for container status ...
	I0805 12:53:21.913753  443495 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 12:53:21.967461  443495 logs.go:123] Gathering logs for kubelet ...
	I0805 12:53:21.967492  443495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 12:53:22.015432  443495 logs.go:123] Gathering logs for dmesg ...
	I0805 12:53:22.015467  443495 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0805 12:53:22.028419  443495 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 12:53:22.028458  443495 out.go:239] * 
	W0805 12:53:22.028518  443495 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 12:53:22.028540  443495 out.go:239] * 
	W0805 12:53:22.029629  443495 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 12:53:22.033000  443495 out.go:177] 
	W0805 12:53:22.034211  443495 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 12:53:22.034254  443495 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 12:53:22.034270  443495 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 12:53:22.036498  443495 out.go:177] 

                                                
                                                
** /stderr **
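The suggestion logged above points at the kubelet cgroup driver. A minimal sketch of the retry it describes for this profile follows; the flag comes straight from the logged suggestion and the other arguments from the failed start command, but whether it actually resolves this failure is an assumption, not something this run verifies:

    # hedged sketch: recreate the profile and pass the cgroup driver the log suggests
    out/minikube-linux-amd64 delete -p old-k8s-version-635707
    out/minikube-linux-amd64 start -p old-k8s-version-635707 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd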
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-635707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 6 (216.31891ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:22.301984  450248 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig

                                                
                                                
** /stderr **
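The status output above warns that kubectl is pointing at a stale context and that the profile is missing from the kubeconfig. A minimal sketch of the fix the warning itself names, assuming the cluster had come up (here it had not, so this is illustrative only):

    # refresh the kubeconfig entry for this profile, then confirm the active contexts
    out/minikube-linux-amd64 update-context -p old-k8s-version-635707
    kubectl config get-contexts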
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-635707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (300.67s)
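The kubeadm output captured above repeatedly recommends inspecting the kubelet and the CRI-O containers on the node. A hedged sketch of running exactly those checks inside the failed VM over minikube ssh (profile name taken from this run; what they print will differ per run):

    # troubleshooting commands recommended by the kubeadm output, executed in the node
    out/minikube-linux-amd64 ssh -p old-k8s-version-635707 "sudo systemctl status kubelet"
    out/minikube-linux-amd64 ssh -p old-k8s-version-635707 "sudo journalctl -xeu kubelet | tail -n 100"
    out/minikube-linux-amd64 ssh -p old-k8s-version-635707 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # once a failing container id is known, its logs can be pulled the same way
    out/minikube-linux-amd64 ssh -p old-k8s-version-635707 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"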

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-321139 --alsologtostderr -v=3
E0805 12:50:54.107444  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:54.579574  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:59.228564  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:59.700226  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-321139 --alsologtostderr -v=3: exit status 82 (2m0.501709017s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-321139"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:50:52.724647  449301 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:50:52.724858  449301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:50:52.724869  449301 out.go:304] Setting ErrFile to fd 2...
	I0805 12:50:52.724873  449301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:50:52.725035  449301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:50:52.725254  449301 out.go:298] Setting JSON to false
	I0805 12:50:52.725323  449301 mustload.go:65] Loading cluster: embed-certs-321139
	I0805 12:50:52.725625  449301 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:50:52.725687  449301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/config.json ...
	I0805 12:50:52.725840  449301 mustload.go:65] Loading cluster: embed-certs-321139
	I0805 12:50:52.725937  449301 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:50:52.725965  449301 stop.go:39] StopHost: embed-certs-321139
	I0805 12:50:52.726302  449301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:50:52.726357  449301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:50:52.743013  449301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0805 12:50:52.743516  449301 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:50:52.744210  449301 main.go:141] libmachine: Using API Version  1
	I0805 12:50:52.744236  449301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:50:52.744587  449301 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:50:52.746827  449301 out.go:177] * Stopping node "embed-certs-321139"  ...
	I0805 12:50:52.748610  449301 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 12:50:52.748654  449301 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:50:52.748898  449301 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 12:50:52.748929  449301 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:50:52.752040  449301 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:50:52.752595  449301 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:50:52.752630  449301 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:50:52.752800  449301 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:50:52.752985  449301 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:50:52.753114  449301 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:50:52.753277  449301 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:50:52.852175  449301 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 12:50:52.910832  449301 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 12:50:52.968650  449301 main.go:141] libmachine: Stopping "embed-certs-321139"...
	I0805 12:50:52.968718  449301 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:50:52.970722  449301 main.go:141] libmachine: (embed-certs-321139) Calling .Stop
	I0805 12:50:52.974737  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 0/120
	I0805 12:50:53.975960  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 1/120
	I0805 12:50:54.978139  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 2/120
	I0805 12:50:55.980699  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 3/120
	I0805 12:50:56.982255  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 4/120
	I0805 12:50:57.984538  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 5/120
	I0805 12:50:58.986174  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 6/120
	I0805 12:50:59.987632  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 7/120
	I0805 12:51:00.989387  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 8/120
	I0805 12:51:01.991088  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 9/120
	I0805 12:51:02.992772  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 10/120
	I0805 12:51:03.994607  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 11/120
	I0805 12:51:04.996051  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 12/120
	I0805 12:51:05.998332  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 13/120
	I0805 12:51:07.000134  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 14/120
	I0805 12:51:08.001948  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 15/120
	I0805 12:51:09.003216  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 16/120
	I0805 12:51:10.004551  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 17/120
	I0805 12:51:11.006490  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 18/120
	I0805 12:51:12.007864  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 19/120
	I0805 12:51:13.009414  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 20/120
	I0805 12:51:14.010834  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 21/120
	I0805 12:51:15.012720  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 22/120
	I0805 12:51:16.014611  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 23/120
	I0805 12:51:17.016104  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 24/120
	I0805 12:51:18.018056  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 25/120
	I0805 12:51:19.019519  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 26/120
	I0805 12:51:20.020856  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 27/120
	I0805 12:51:21.023174  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 28/120
	I0805 12:51:22.024527  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 29/120
	I0805 12:51:23.026833  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 30/120
	I0805 12:51:24.028331  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 31/120
	I0805 12:51:25.030096  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 32/120
	I0805 12:51:26.031610  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 33/120
	I0805 12:51:27.032972  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 34/120
	I0805 12:51:28.034929  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 35/120
	I0805 12:51:29.036708  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 36/120
	I0805 12:51:30.038819  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 37/120
	I0805 12:51:31.040555  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 38/120
	I0805 12:51:32.042240  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 39/120
	I0805 12:51:33.044583  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 40/120
	I0805 12:51:34.046429  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 41/120
	I0805 12:51:35.048383  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 42/120
	I0805 12:51:36.050204  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 43/120
	I0805 12:51:37.051673  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 44/120
	I0805 12:51:38.053632  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 45/120
	I0805 12:51:39.055028  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 46/120
	I0805 12:51:40.056545  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 47/120
	I0805 12:51:41.058123  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 48/120
	I0805 12:51:42.059634  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 49/120
	I0805 12:51:43.061893  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 50/120
	I0805 12:51:44.063465  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 51/120
	I0805 12:51:45.065419  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 52/120
	I0805 12:51:46.066746  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 53/120
	I0805 12:51:47.068019  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 54/120
	I0805 12:51:48.070051  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 55/120
	I0805 12:51:49.071437  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 56/120
	I0805 12:51:50.072840  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 57/120
	I0805 12:51:51.074322  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 58/120
	I0805 12:51:52.075768  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 59/120
	I0805 12:51:53.077974  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 60/120
	I0805 12:51:54.079582  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 61/120
	I0805 12:51:55.081037  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 62/120
	I0805 12:51:56.082469  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 63/120
	I0805 12:51:57.083810  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 64/120
	I0805 12:51:58.085511  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 65/120
	I0805 12:51:59.086841  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 66/120
	I0805 12:52:00.088376  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 67/120
	I0805 12:52:01.089772  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 68/120
	I0805 12:52:02.091165  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 69/120
	I0805 12:52:03.092600  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 70/120
	I0805 12:52:04.094052  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 71/120
	I0805 12:52:05.095654  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 72/120
	I0805 12:52:06.097138  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 73/120
	I0805 12:52:07.099265  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 74/120
	I0805 12:52:08.101245  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 75/120
	I0805 12:52:09.102834  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 76/120
	I0805 12:52:10.104494  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 77/120
	I0805 12:52:11.105734  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 78/120
	I0805 12:52:12.107049  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 79/120
	I0805 12:52:13.108492  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 80/120
	I0805 12:52:14.109887  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 81/120
	I0805 12:52:15.111698  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 82/120
	I0805 12:52:16.113176  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 83/120
	I0805 12:52:17.114833  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 84/120
	I0805 12:52:18.117261  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 85/120
	I0805 12:52:19.118901  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 86/120
	I0805 12:52:20.120827  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 87/120
	I0805 12:52:21.122216  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 88/120
	I0805 12:52:22.123680  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 89/120
	I0805 12:52:23.126139  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 90/120
	I0805 12:52:24.127610  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 91/120
	I0805 12:52:25.129083  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 92/120
	I0805 12:52:26.130327  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 93/120
	I0805 12:52:27.132013  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 94/120
	I0805 12:52:28.134093  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 95/120
	I0805 12:52:29.135271  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 96/120
	I0805 12:52:30.136668  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 97/120
	I0805 12:52:31.137846  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 98/120
	I0805 12:52:32.139283  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 99/120
	I0805 12:52:33.141323  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 100/120
	I0805 12:52:34.142838  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 101/120
	I0805 12:52:35.144360  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 102/120
	I0805 12:52:36.145795  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 103/120
	I0805 12:52:37.147353  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 104/120
	I0805 12:52:38.149293  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 105/120
	I0805 12:52:39.150686  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 106/120
	I0805 12:52:40.152236  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 107/120
	I0805 12:52:41.153733  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 108/120
	I0805 12:52:42.155010  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 109/120
	I0805 12:52:43.157353  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 110/120
	I0805 12:52:44.158766  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 111/120
	I0805 12:52:45.160158  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 112/120
	I0805 12:52:46.161704  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 113/120
	I0805 12:52:47.162990  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 114/120
	I0805 12:52:48.165161  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 115/120
	I0805 12:52:49.166390  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 116/120
	I0805 12:52:50.167863  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 117/120
	I0805 12:52:51.169347  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 118/120
	I0805 12:52:52.170817  449301 main.go:141] libmachine: (embed-certs-321139) Waiting for machine to stop 119/120
	I0805 12:52:53.171783  449301 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 12:52:53.171865  449301 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0805 12:52:53.173961  449301 out.go:177] 
	W0805 12:52:53.175558  449301 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0805 12:52:53.175582  449301 out.go:239] * 
	* 
	W0805 12:52:53.179098  449301 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 12:52:53.180560  449301 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-321139 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139
E0805 12:52:53.388929  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:58.509873  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:53:08.751112  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139: exit status 3 (18.469848435s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:11.652213  450021 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host
	E0805 12:53:11.652231  450021 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-321139" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.97s)
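The stderr trace above shows the stop path polling the VM once per second for up to 120 attempts and then exiting with GUEST_STOP_TIMEOUT while the machine is still "Running". The Go snippet below is only a minimal sketch of that bounded-poll pattern; it is not the minikube/libmachine implementation, and vmState/stopVM are hypothetical stand-ins for the driver calls (.GetState / .Stop) seen in the log.

package main

import (
	"fmt"
	"time"
)

// vmState and stopVM are hypothetical stand-ins for the libmachine driver
// calls that appear in the trace above.
func vmState(name string) string { return "Running" }
func stopVM(name string)         {}

// waitForStop mirrors the pattern in the trace: request a stop, then poll
// once per second for at most `attempts` iterations before giving up.
func waitForStop(name string, attempts int) error {
	stopVM(name)
	for i := 0; i < attempts; i++ {
		if vmState(name) == "Stopped" {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", vmState(name))
}

func main() {
	// The real run above uses 120 attempts; 3 keeps this sketch quick.
	if err := waitForStop("embed-certs-321139", 3); err != nil {
		fmt.Println("stop err:", err) // corresponds to the GUEST_STOP_TIMEOUT exit
	}
}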

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-669469 --alsologtostderr -v=3
E0805 12:51:29.950115  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:51:30.421495  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-669469 --alsologtostderr -v=3: exit status 82 (2m0.497332092s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-669469"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:51:12.073753  449500 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:51:12.074182  449500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:51:12.074199  449500 out.go:304] Setting ErrFile to fd 2...
	I0805 12:51:12.074206  449500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:51:12.074707  449500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:51:12.075120  449500 out.go:298] Setting JSON to false
	I0805 12:51:12.075431  449500 mustload.go:65] Loading cluster: no-preload-669469
	I0805 12:51:12.075848  449500 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:51:12.075923  449500 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/config.json ...
	I0805 12:51:12.076131  449500 mustload.go:65] Loading cluster: no-preload-669469
	I0805 12:51:12.076240  449500 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:51:12.076272  449500 stop.go:39] StopHost: no-preload-669469
	I0805 12:51:12.076810  449500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:51:12.076875  449500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:51:12.091643  449500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38319
	I0805 12:51:12.092304  449500 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:51:12.092909  449500 main.go:141] libmachine: Using API Version  1
	I0805 12:51:12.092935  449500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:51:12.093363  449500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:51:12.095878  449500 out.go:177] * Stopping node "no-preload-669469"  ...
	I0805 12:51:12.097327  449500 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 12:51:12.097366  449500 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:51:12.097669  449500 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 12:51:12.097694  449500 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:51:12.100794  449500 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:51:12.101244  449500 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:49:32 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:51:12.101266  449500 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:51:12.101464  449500 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:51:12.101623  449500 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:51:12.101794  449500 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:51:12.101952  449500 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:51:12.201701  449500 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 12:51:12.254263  449500 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 12:51:12.309060  449500 main.go:141] libmachine: Stopping "no-preload-669469"...
	I0805 12:51:12.309091  449500 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 12:51:12.310917  449500 main.go:141] libmachine: (no-preload-669469) Calling .Stop
	I0805 12:51:12.314909  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 0/120
	I0805 12:51:13.316689  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 1/120
	I0805 12:51:14.317862  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 2/120
	I0805 12:51:15.319338  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 3/120
	I0805 12:51:16.320959  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 4/120
	I0805 12:51:17.322997  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 5/120
	I0805 12:51:18.324357  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 6/120
	I0805 12:51:19.325672  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 7/120
	I0805 12:51:20.327215  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 8/120
	I0805 12:51:21.328663  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 9/120
	I0805 12:51:22.331056  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 10/120
	I0805 12:51:23.332617  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 11/120
	I0805 12:51:24.334598  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 12/120
	I0805 12:51:25.335935  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 13/120
	I0805 12:51:26.337456  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 14/120
	I0805 12:51:27.339418  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 15/120
	I0805 12:51:28.340898  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 16/120
	I0805 12:51:29.342559  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 17/120
	I0805 12:51:30.344162  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 18/120
	I0805 12:51:31.346544  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 19/120
	I0805 12:51:32.348792  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 20/120
	I0805 12:51:33.350503  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 21/120
	I0805 12:51:34.352105  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 22/120
	I0805 12:51:35.353893  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 23/120
	I0805 12:51:36.355151  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 24/120
	I0805 12:51:37.357091  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 25/120
	I0805 12:51:38.359226  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 26/120
	I0805 12:51:39.360504  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 27/120
	I0805 12:51:40.362259  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 28/120
	I0805 12:51:41.363771  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 29/120
	I0805 12:51:42.365975  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 30/120
	I0805 12:51:43.367390  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 31/120
	I0805 12:51:44.369139  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 32/120
	I0805 12:51:45.370863  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 33/120
	I0805 12:51:46.372459  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 34/120
	I0805 12:51:47.374506  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 35/120
	I0805 12:51:48.375727  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 36/120
	I0805 12:51:49.377175  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 37/120
	I0805 12:51:50.379510  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 38/120
	I0805 12:51:51.380977  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 39/120
	I0805 12:51:52.383632  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 40/120
	I0805 12:51:53.385566  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 41/120
	I0805 12:51:54.387423  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 42/120
	I0805 12:51:55.389036  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 43/120
	I0805 12:51:56.390292  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 44/120
	I0805 12:51:57.392413  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 45/120
	I0805 12:51:58.393814  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 46/120
	I0805 12:51:59.395314  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 47/120
	I0805 12:52:00.397006  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 48/120
	I0805 12:52:01.398384  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 49/120
	I0805 12:52:02.400639  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 50/120
	I0805 12:52:03.401931  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 51/120
	I0805 12:52:04.403466  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 52/120
	I0805 12:52:05.404769  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 53/120
	I0805 12:52:06.406146  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 54/120
	I0805 12:52:07.408068  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 55/120
	I0805 12:52:08.409650  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 56/120
	I0805 12:52:09.410962  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 57/120
	I0805 12:52:10.412886  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 58/120
	I0805 12:52:11.414505  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 59/120
	I0805 12:52:12.416987  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 60/120
	I0805 12:52:13.418821  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 61/120
	I0805 12:52:14.420315  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 62/120
	I0805 12:52:15.421774  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 63/120
	I0805 12:52:16.423392  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 64/120
	I0805 12:52:17.425602  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 65/120
	I0805 12:52:18.427355  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 66/120
	I0805 12:52:19.428775  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 67/120
	I0805 12:52:20.430220  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 68/120
	I0805 12:52:21.431812  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 69/120
	I0805 12:52:22.433978  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 70/120
	I0805 12:52:23.435409  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 71/120
	I0805 12:52:24.436996  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 72/120
	I0805 12:52:25.438510  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 73/120
	I0805 12:52:26.439954  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 74/120
	I0805 12:52:27.442001  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 75/120
	I0805 12:52:28.443461  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 76/120
	I0805 12:52:29.445018  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 77/120
	I0805 12:52:30.446387  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 78/120
	I0805 12:52:31.448472  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 79/120
	I0805 12:52:32.450700  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 80/120
	I0805 12:52:33.452441  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 81/120
	I0805 12:52:34.453828  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 82/120
	I0805 12:52:35.455086  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 83/120
	I0805 12:52:36.456502  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 84/120
	I0805 12:52:37.458164  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 85/120
	I0805 12:52:38.459515  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 86/120
	I0805 12:52:39.460801  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 87/120
	I0805 12:52:40.462199  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 88/120
	I0805 12:52:41.463418  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 89/120
	I0805 12:52:42.465655  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 90/120
	I0805 12:52:43.466996  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 91/120
	I0805 12:52:44.468491  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 92/120
	I0805 12:52:45.470103  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 93/120
	I0805 12:52:46.471469  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 94/120
	I0805 12:52:47.473379  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 95/120
	I0805 12:52:48.474754  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 96/120
	I0805 12:52:49.476124  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 97/120
	I0805 12:52:50.477940  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 98/120
	I0805 12:52:51.479251  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 99/120
	I0805 12:52:52.481442  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 100/120
	I0805 12:52:53.483116  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 101/120
	I0805 12:52:54.484696  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 102/120
	I0805 12:52:55.486342  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 103/120
	I0805 12:52:56.487792  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 104/120
	I0805 12:52:57.489854  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 105/120
	I0805 12:52:58.491482  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 106/120
	I0805 12:52:59.492954  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 107/120
	I0805 12:53:00.494462  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 108/120
	I0805 12:53:01.496482  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 109/120
	I0805 12:53:02.497885  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 110/120
	I0805 12:53:03.499351  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 111/120
	I0805 12:53:04.500733  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 112/120
	I0805 12:53:05.502608  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 113/120
	I0805 12:53:06.504077  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 114/120
	I0805 12:53:07.506315  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 115/120
	I0805 12:53:08.507533  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 116/120
	I0805 12:53:09.508811  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 117/120
	I0805 12:53:10.510230  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 118/120
	I0805 12:53:11.511570  449500 main.go:141] libmachine: (no-preload-669469) Waiting for machine to stop 119/120
	I0805 12:53:12.512701  449500 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 12:53:12.512794  449500 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0805 12:53:12.514680  449500 out.go:177] 
	W0805 12:53:12.516018  449500 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0805 12:53:12.516034  449500 out.go:239] * 
	* 
	W0805 12:53:12.519310  449500 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 12:53:12.520534  449500 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-669469 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469: exit status 3 (18.585329756s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:31.108008  450148 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.223:22: connect: no route to host
	E0805 12:53:31.108027  450148 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.223:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-669469" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.08s)
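The post-mortem step above (helpers_test.go:239/241) amounts to re-running the status command and tolerating exit status 3 when the host is unreachable. The following is a rough Go sketch of that check, assuming the binary path and profile name shown in the log; checkHostStatus is an illustrative helper, not part of the test harness.

package main

import (
	"fmt"
	"os/exec"
)

// checkHostStatus runs the same status probe the harness uses in the
// post-mortem above and treats exit status 3 as "host not running".
func checkHostStatus(profile string) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	fmt.Printf("status output: %s\n", out)

	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 3 {
		// Matches "status error: exit status 3 (may be ok)" in the report:
		// the host is unreachable, so log retrieval is skipped.
		fmt.Printf("%q host is not running, skipping log retrieval\n", profile)
		return
	}
	if err != nil {
		fmt.Println("status check failed:", err)
	}
}

func main() {
	checkHostStatus("no-preload-669469")
}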

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-371585 --alsologtostderr -v=3
E0805 12:52:09.899916  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:09.905212  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:09.915423  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:09.936374  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:09.976636  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:10.056962  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:10.217875  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:10.538290  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:10.911117  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:52:11.179141  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:11.382562  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:52:12.460365  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:15.020917  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:20.141095  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:30.382281  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:48.268654  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:48.273950  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:48.284221  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:48.304501  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:48.344835  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:48.425270  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:48.586143  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:48.906723  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:49.547541  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:50.828639  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:52:50.863109  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:52:52.926341  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-371585 --alsologtostderr -v=3: exit status 82 (2m0.486933383s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-371585"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:51:45.643490  449739 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:51:45.643616  449739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:51:45.643626  449739 out.go:304] Setting ErrFile to fd 2...
	I0805 12:51:45.643632  449739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:51:45.643868  449739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:51:45.644170  449739 out.go:298] Setting JSON to false
	I0805 12:51:45.644255  449739 mustload.go:65] Loading cluster: default-k8s-diff-port-371585
	I0805 12:51:45.644602  449739 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:51:45.644667  449739 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/config.json ...
	I0805 12:51:45.644843  449739 mustload.go:65] Loading cluster: default-k8s-diff-port-371585
	I0805 12:51:45.644943  449739 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:51:45.644970  449739 stop.go:39] StopHost: default-k8s-diff-port-371585
	I0805 12:51:45.645391  449739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:51:45.645442  449739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:51:45.660066  449739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0805 12:51:45.660568  449739 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:51:45.661165  449739 main.go:141] libmachine: Using API Version  1
	I0805 12:51:45.661189  449739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:51:45.661536  449739 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:51:45.663892  449739 out.go:177] * Stopping node "default-k8s-diff-port-371585"  ...
	I0805 12:51:45.665070  449739 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 12:51:45.665117  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:51:45.665372  449739 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 12:51:45.665403  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:51:45.667951  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:51:45.668389  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:51:45.668429  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:51:45.668557  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:51:45.668815  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:51:45.668991  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:51:45.669142  449739 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:51:45.756191  449739 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 12:51:45.818564  449739 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 12:51:45.883339  449739 main.go:141] libmachine: Stopping "default-k8s-diff-port-371585"...
	I0805 12:51:45.883377  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 12:51:45.885190  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Stop
	I0805 12:51:45.889322  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 0/120
	I0805 12:51:46.891208  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 1/120
	I0805 12:51:47.892725  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 2/120
	I0805 12:51:48.894113  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 3/120
	I0805 12:51:49.895711  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 4/120
	I0805 12:51:50.897632  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 5/120
	I0805 12:51:51.899065  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 6/120
	I0805 12:51:52.900690  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 7/120
	I0805 12:51:53.902778  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 8/120
	I0805 12:51:54.904265  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 9/120
	I0805 12:51:55.905457  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 10/120
	I0805 12:51:56.907773  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 11/120
	I0805 12:51:57.909267  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 12/120
	I0805 12:51:58.911177  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 13/120
	I0805 12:51:59.912609  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 14/120
	I0805 12:52:00.914698  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 15/120
	I0805 12:52:01.916118  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 16/120
	I0805 12:52:02.918263  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 17/120
	I0805 12:52:03.919832  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 18/120
	I0805 12:52:04.921219  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 19/120
	I0805 12:52:05.923516  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 20/120
	I0805 12:52:06.924896  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 21/120
	I0805 12:52:07.926422  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 22/120
	I0805 12:52:08.927873  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 23/120
	I0805 12:52:09.929147  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 24/120
	I0805 12:52:10.931660  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 25/120
	I0805 12:52:11.932994  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 26/120
	I0805 12:52:12.934273  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 27/120
	I0805 12:52:13.935977  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 28/120
	I0805 12:52:14.937558  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 29/120
	I0805 12:52:15.939165  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 30/120
	I0805 12:52:16.940593  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 31/120
	I0805 12:52:17.942248  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 32/120
	I0805 12:52:18.943718  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 33/120
	I0805 12:52:19.945148  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 34/120
	I0805 12:52:20.947418  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 35/120
	I0805 12:52:21.948996  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 36/120
	I0805 12:52:22.950331  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 37/120
	I0805 12:52:23.952224  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 38/120
	I0805 12:52:24.953642  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 39/120
	I0805 12:52:25.956054  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 40/120
	I0805 12:52:26.957447  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 41/120
	I0805 12:52:27.958689  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 42/120
	I0805 12:52:28.959992  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 43/120
	I0805 12:52:29.961323  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 44/120
	I0805 12:52:30.963269  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 45/120
	I0805 12:52:31.964599  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 46/120
	I0805 12:52:32.966180  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 47/120
	I0805 12:52:33.967536  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 48/120
	I0805 12:52:34.969040  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 49/120
	I0805 12:52:35.971242  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 50/120
	I0805 12:52:36.972853  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 51/120
	I0805 12:52:37.974592  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 52/120
	I0805 12:52:38.975986  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 53/120
	I0805 12:52:39.977431  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 54/120
	I0805 12:52:40.979581  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 55/120
	I0805 12:52:41.980833  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 56/120
	I0805 12:52:42.982373  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 57/120
	I0805 12:52:43.983611  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 58/120
	I0805 12:52:44.984895  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 59/120
	I0805 12:52:45.987288  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 60/120
	I0805 12:52:46.988797  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 61/120
	I0805 12:52:47.990134  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 62/120
	I0805 12:52:48.991620  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 63/120
	I0805 12:52:49.993190  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 64/120
	I0805 12:52:50.995415  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 65/120
	I0805 12:52:51.996857  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 66/120
	I0805 12:52:52.998404  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 67/120
	I0805 12:52:53.999684  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 68/120
	I0805 12:52:55.000947  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 69/120
	I0805 12:52:56.003221  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 70/120
	I0805 12:52:57.004604  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 71/120
	I0805 12:52:58.006087  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 72/120
	I0805 12:52:59.007446  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 73/120
	I0805 12:53:00.008787  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 74/120
	I0805 12:53:01.010786  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 75/120
	I0805 12:53:02.012193  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 76/120
	I0805 12:53:03.013784  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 77/120
	I0805 12:53:04.015224  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 78/120
	I0805 12:53:05.016730  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 79/120
	I0805 12:53:06.018899  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 80/120
	I0805 12:53:07.020376  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 81/120
	I0805 12:53:08.021642  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 82/120
	I0805 12:53:09.023116  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 83/120
	I0805 12:53:10.024564  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 84/120
	I0805 12:53:11.026763  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 85/120
	I0805 12:53:12.028403  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 86/120
	I0805 12:53:13.029793  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 87/120
	I0805 12:53:14.031436  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 88/120
	I0805 12:53:15.032791  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 89/120
	I0805 12:53:16.035317  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 90/120
	I0805 12:53:17.036949  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 91/120
	I0805 12:53:18.038332  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 92/120
	I0805 12:53:19.039606  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 93/120
	I0805 12:53:20.040993  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 94/120
	I0805 12:53:21.042352  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 95/120
	I0805 12:53:22.043838  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 96/120
	I0805 12:53:23.044871  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 97/120
	I0805 12:53:24.046396  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 98/120
	I0805 12:53:25.047867  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 99/120
	I0805 12:53:26.050001  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 100/120
	I0805 12:53:27.051559  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 101/120
	I0805 12:53:28.052812  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 102/120
	I0805 12:53:29.054212  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 103/120
	I0805 12:53:30.055523  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 104/120
	I0805 12:53:31.057542  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 105/120
	I0805 12:53:32.058869  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 106/120
	I0805 12:53:33.060016  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 107/120
	I0805 12:53:34.061325  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 108/120
	I0805 12:53:35.062763  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 109/120
	I0805 12:53:36.064968  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 110/120
	I0805 12:53:37.066335  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 111/120
	I0805 12:53:38.067522  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 112/120
	I0805 12:53:39.068900  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 113/120
	I0805 12:53:40.070082  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 114/120
	I0805 12:53:41.072081  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 115/120
	I0805 12:53:42.073526  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 116/120
	I0805 12:53:43.074909  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 117/120
	I0805 12:53:44.076263  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 118/120
	I0805 12:53:45.077632  449739 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for machine to stop 119/120
	I0805 12:53:46.078790  449739 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 12:53:46.078850  449739 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0805 12:53:46.080714  449739 out.go:177] 
	W0805 12:53:46.082052  449739 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0805 12:53:46.082073  449739 out.go:239] * 
	* 
	W0805 12:53:46.085532  449739 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 12:53:46.086781  449739 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-371585 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
E0805 12:53:46.576950  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:56.817596  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585: exit status 3 (18.555259021s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:54:04.644144  450640 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.228:22: connect: no route to host
	E0805 12:54:04.644166  450640 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.228:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371585" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)
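Editor's note: the GUEST_STOP_TIMEOUT above comes straight out of the 120-iteration wait loop visible in the log ("Waiting for machine to stop N/120"): the driver polls the VM state roughly once per second and, when every attempt still reports "Running", minikube surfaces the stop error as exit status 82. The following is a minimal Go sketch of that poll-until-stopped pattern; the vm type and its State/Stop methods are hypothetical stand-ins, not minikube's or libmachine's real API:

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// vm is a hypothetical stand-in for a libmachine driver; it pretends the
// guest finally reports "Stopped" after stopsAfter state polls.
type vm struct {
	stopsAfter int
	polls      int
}

func (v *vm) State() string {
	v.polls++
	if v.polls >= v.stopsAfter {
		return "Stopped"
	}
	return "Running"
}

// Stop would ask the hypervisor to shut the guest down; it is a no-op here.
func (v *vm) Stop() {}

// waitForStop mirrors the "Waiting for machine to stop N/120" countdown in
// the log: poll once per second, up to maxAttempts times, and give up with
// the error that minikube maps to GUEST_STOP_TIMEOUT (exit status 82).
func waitForStop(v *vm, maxAttempts int) error {
	v.Stop()
	for i := 0; i < maxAttempts; i++ {
		if v.State() != "Running" {
			return nil
		}
		log.Printf("Waiting for machine to stop %d/%d", i, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(&vm{stopsAfter: 3}, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}

In the failing run the guest never left "Running" within the 120 attempts, which is why every downstream status check then reports "Error" over SSH instead of "Stopped".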

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139: exit status 3 (3.167563624s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:14.820117  450102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host
	E0805 12:53:14.820137  450102 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-321139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-321139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154597706s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-321139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139: exit status 3 (3.061119457s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:24.036156  450214 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host
	E0805 12:53:24.036184  450214 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-321139" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
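Editor's note: EnableAddonAfterStop first asserts that `minikube status --format={{.Host}}` reports "Stopped" for the profile; here it gets "Error" because SSH to the still-running but unreachable guest fails with "no route to host". A hedged Go sketch of that status assertion is below; the binary path, flags, and profile name are copied from the log, while the surrounding helper is illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs the same status command shown in the log and returns the
// trimmed {{.Host}} value ("Running", "Stopped", "Error", ...).
func hostStatus(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin,
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	// minikube status exits non-zero for stopped or errored hosts, so keep
	// whatever it printed and let the caller interpret the state.
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, _ := hostStatus("out/minikube-linux-amd64", "embed-certs-321139")
	if state != "Stopped" {
		fmt.Printf("expected post-stop host status %q, got %q\n", "Stopped", state)
	}
}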

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-635707 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-635707 create -f testdata/busybox.yaml: exit status 1 (43.256463ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-635707" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-635707 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 6 (212.506646ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:22.558608  450303 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-635707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 6 (212.203059ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:22.771036  450333 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-635707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
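Editor's note: DeployApp fails before anything is created because the kubeconfig no longer contains the old-k8s-version-635707 context (the status output above shows the endpoint "does not appear in .../kubeconfig"). A small Go sketch of a pre-flight check for that condition follows; the context name is from the log and the guard itself is illustrative, not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContext reports whether the named context appears in
// `kubectl config get-contexts -o name`.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if c == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("old-k8s-version-635707")
	// false here would explain the "context ... does not exist" error above
	fmt.Println(ok, err)
}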

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-635707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-635707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.401110056s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-635707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-635707 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-635707 describe deploy/metrics-server -n kube-system: exit status 1 (44.808554ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-635707" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-635707 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
E0805 12:55:08.286545  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 6 (215.627065ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:55:08.431918  451126 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-635707" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.66s)
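Editor's note: after the addon enable fails (the in-VM kubectl apply cannot reach the apiserver on localhost:8443), the test still checks that the metrics-server deployment references the overridden image, which is why the log reports "addon did not load correct image". A Go sketch of that follow-up assertion is below; the context and image strings come from the log, the helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// deploymentUsesImage describes the metrics-server deployment via kubectl
// and reports whether its output mentions the expected image reference.
func deploymentUsesImage(context, image string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("describe failed: %w (output: %s)", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := deploymentUsesImage("old-k8s-version-635707",
		"fake.domain/registry.k8s.io/echoserver:1.4")
	if err != nil {
		fmt.Println(err)
		return
	}
	if !ok {
		fmt.Println("addon did not load correct image")
	}
}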

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469
E0805 12:53:31.823984  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:53:32.832110  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:53:33.303565  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469: exit status 3 (3.168166443s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:34.276113  450450 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.223:22: connect: no route to host
	E0805 12:53:34.276134  450450 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.223:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-669469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0805 12:53:36.335578  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:36.340886  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:36.351153  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:36.371397  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:36.411702  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:36.492144  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:36.652664  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:36.973452  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:37.614112  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:53:38.895051  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-669469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153727042s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.223:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-669469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469
E0805 12:53:41.455797  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469: exit status 3 (3.062003505s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:53:43.492262  450530 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.223:22: connect: no route to host
	E0805 12:53:43.492288  450530 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.223:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-669469" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
E0805 12:54:06.748750  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:06.754037  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:06.764171  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:06.784554  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:06.824934  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:06.905436  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:07.066118  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:07.386753  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585: exit status 3 (3.167814866s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:54:07.812086  450754 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.228:22: connect: no route to host
	E0805 12:54:07.812106  450754 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.228:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-371585 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0805 12:54:08.027869  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:09.308768  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:10.191972  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:54:11.869055  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-371585 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153593138s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.228:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-371585 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
E0805 12:54:15.979560  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 12:54:16.990164  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585: exit status 3 (3.061860555s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:54:17.028053  450855 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.228:22: connect: no route to host
	E0805 12:54:17.028071  450855 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.228:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-371585" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (721.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-635707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0805 12:55:12.128350  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:17.249501  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:27.489894  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:27.753420  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 12:55:28.673049  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:55:32.113090  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:55:47.970460  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:48.987044  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:55:49.458874  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:56:16.672418  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:56:17.144370  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:56:20.179354  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:56:28.930812  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:56:50.593366  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:57:09.900574  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:57:37.584713  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:57:48.267949  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:57:50.851028  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:57:52.926422  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 12:58:15.954107  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 12:58:36.335892  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:59:04.020304  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:59:06.749125  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:59:34.434207  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 13:00:07.008585  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 13:00:27.753087  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 13:00:34.691352  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 13:00:48.987705  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 13:00:49.459119  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 13:02:09.900410  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 13:02:48.268406  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 13:02:52.927308  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 13:03:30.810184  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 13:03:36.335355  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-635707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m57.970567616s)

                                                
                                                
-- stdout --
	* [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-635707" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:55:11.960471  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960479  451238 out.go:304] Setting ErrFile to fd 2...
	I0805 12:55:11.960484  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960646  451238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:55:11.961145  451238 out.go:298] Setting JSON to false
	I0805 12:55:11.962063  451238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9459,"bootTime":1722853053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:55:11.962121  451238 start.go:139] virtualization: kvm guest
	I0805 12:55:11.964372  451238 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:55:11.965770  451238 notify.go:220] Checking for updates...
	I0805 12:55:11.965787  451238 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:55:11.967106  451238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:55:11.968790  451238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:55:11.970181  451238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:55:11.971500  451238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:55:11.973243  451238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:55:11.974825  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:55:11.975239  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.975319  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:11.990296  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0805 12:55:11.990704  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:11.991235  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:11.991259  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:11.991575  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:11.991765  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:11.993484  451238 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 12:55:11.994687  451238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:55:11.994952  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.994984  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:12.009528  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0805 12:55:12.009879  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:12.010353  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:12.010375  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:12.010670  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:12.010857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:12.044634  451238 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:55:12.045859  451238 start.go:297] selected driver: kvm2
	I0805 12:55:12.045876  451238 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.045987  451238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:55:12.046662  451238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.046731  451238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:55:12.061918  451238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:55:12.062400  451238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:55:12.062484  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:55:12.062502  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:55:12.062572  451238 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.062722  451238 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.064478  451238 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:55:12.065640  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:55:12.065680  451238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:55:12.065701  451238 cache.go:56] Caching tarball of preloaded images
	I0805 12:55:12.065786  451238 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:55:12.065797  451238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:55:12.065897  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:55:12.066073  451238 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:58:42.080993  451238 start.go:364] duration metric: took 3m30.014883629s to acquireMachinesLock for "old-k8s-version-635707"
	I0805 12:58:42.081066  451238 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:42.081076  451238 fix.go:54] fixHost starting: 
	I0805 12:58:42.081569  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:42.081611  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:42.101889  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0805 12:58:42.102366  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:42.102910  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:58:42.102947  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:42.103310  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:42.103552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:58:42.103718  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:58:42.105465  451238 fix.go:112] recreateIfNeeded on old-k8s-version-635707: state=Stopped err=<nil>
	I0805 12:58:42.105504  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	W0805 12:58:42.105674  451238 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:42.107563  451238 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-635707" ...
	I0805 12:58:42.109016  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .Start
	I0805 12:58:42.109214  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:58:42.110192  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:58:42.110686  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:58:42.111108  451238 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:58:42.112194  451238 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:58:43.453015  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:58:43.453994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.454435  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.454504  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.454435  452186 retry.go:31] will retry after 270.355403ms: waiting for machine to come up
	I0805 12:58:43.727101  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.727583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.727641  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.727568  452186 retry.go:31] will retry after 313.75466ms: waiting for machine to come up
	I0805 12:58:44.043303  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.043954  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.043981  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.043855  452186 retry.go:31] will retry after 308.608573ms: waiting for machine to come up
	I0805 12:58:44.354830  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.355396  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.355421  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.355305  452186 retry.go:31] will retry after 510.256657ms: waiting for machine to come up
	I0805 12:58:44.866970  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.867534  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.867559  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.867424  452186 retry.go:31] will retry after 668.55006ms: waiting for machine to come up
	I0805 12:58:45.537377  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:45.537959  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:45.537989  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:45.537909  452186 retry.go:31] will retry after 677.549944ms: waiting for machine to come up
	I0805 12:58:46.217077  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:46.217591  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:46.217625  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:46.217483  452186 retry.go:31] will retry after 847.636867ms: waiting for machine to come up
	I0805 12:58:47.067245  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:47.067895  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:47.067930  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:47.067838  452186 retry.go:31] will retry after 1.275228928s: waiting for machine to come up
	I0805 12:58:48.344881  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:48.345295  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:48.345319  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:48.345258  452186 retry.go:31] will retry after 1.826891386s: waiting for machine to come up
	I0805 12:58:50.174583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:50.175111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:50.175138  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:50.175074  452186 retry.go:31] will retry after 1.53756677s: waiting for machine to come up
	I0805 12:58:51.714025  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:51.714529  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:51.714553  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:51.714485  452186 retry.go:31] will retry after 2.762270002s: waiting for machine to come up
	I0805 12:58:54.478201  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:54.478619  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:54.478650  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:54.478579  452186 retry.go:31] will retry after 2.992766963s: waiting for machine to come up
	I0805 12:58:57.473094  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:57.473555  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:57.473587  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:57.473495  452186 retry.go:31] will retry after 4.27138033s: waiting for machine to come up
	I0805 12:59:01.750111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750558  451238 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:59:01.750586  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750593  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:59:01.751003  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:59:01.751061  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.751081  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:59:01.751112  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | skip adding static IP to network mk-old-k8s-version-635707 - found existing host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"}
	I0805 12:59:01.751130  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:59:01.753240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753634  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.753672  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753810  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:59:01.753854  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:59:01.753900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:01.753919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:59:01.753933  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:59:01.875919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
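(Editor's note, illustrative only.) The lines above show the usual libmachine pattern: poll the DHCP leases for the VM's IP with growing delays, then probe SSH with "exit 0" until the guest answers. A minimal, self-contained Go sketch of that retry shape, assuming made-up delays and jitter rather than minikube's actual retry.go values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it succeeds or the timeout elapses, sleeping a
// growing, slightly jittered delay between attempts, like the
// "will retry after ..." lines in the log above.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		// Grow the delay by ~1.5x plus a little jitter (an assumption for
		// illustration, not the real backoff policy).
		delay = delay*3/2 + time.Duration(rand.Int63n(int64(delay/2)+1))
	}
}

func main() {
	start := time.Now()
	_ = waitFor(func() error {
		if time.Since(start) > 2*time.Second {
			return nil // pretend the VM finally reported an IP
		}
		return errors.New("unable to find current IP address")
	}, 30*time.Second)
}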
	I0805 12:59:01.876298  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:59:01.877028  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:01.879644  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880120  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.880164  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880508  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:59:01.880778  451238 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:01.880805  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:01.881039  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.882998  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883362  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.883389  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883553  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.883755  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.883900  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.884012  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.884248  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.884496  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.884511  451238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:59:01.984198  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:01.984237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984501  451238 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:59:01.984534  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984750  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.987690  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988085  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.988115  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988240  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.988470  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988782  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988945  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.989173  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.989407  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.989425  451238 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:59:02.108368  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:59:02.108406  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.111301  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111669  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.111712  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111837  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.112027  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112212  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112393  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.112563  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.112797  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.112824  451238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:02.225638  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:02.225681  451238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:02.225731  451238 buildroot.go:174] setting up certificates
	I0805 12:59:02.225745  451238 provision.go:84] configureAuth start
	I0805 12:59:02.225760  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:02.226099  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:02.229252  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229643  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.229671  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229885  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.232479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.232912  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.232951  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.233125  451238 provision.go:143] copyHostCerts
	I0805 12:59:02.233188  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:02.233201  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:02.233271  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:02.233412  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:02.233426  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:02.233459  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:02.233543  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:02.233553  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:02.233581  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:02.233661  451238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
	I0805 12:59:02.470213  451238 provision.go:177] copyRemoteCerts
	I0805 12:59:02.470328  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:02.470369  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.473450  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473791  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.473829  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473964  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.474173  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.474313  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.474429  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:02.558831  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:02.583652  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:59:02.609154  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:02.635827  451238 provision.go:87] duration metric: took 410.067115ms to configureAuth
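(Editor's note, illustrative only.) configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, 192.168.61.41, localhost, minikube and the node name, then copies it to /etc/docker on the guest. A minimal sketch of building such a SAN-bearing certificate with Go's crypto/x509; it self-signs purely for illustration, whereas the real cert is signed by the CA under .minikube/certs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and template; org and expiry mirror the values printed in the log,
	// everything else is a placeholder.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-635707"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-635707"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.41")},
	}
	// Self-signed here (template is its own parent) just to show where the
	// SANs live; a real deployment signs with the CA key instead.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}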
	I0805 12:59:02.635862  451238 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:02.636109  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:59:02.636357  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.638964  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639466  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.639489  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639644  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.639953  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640197  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640454  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.640733  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.640975  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.641000  451238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:02.917466  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:02.917499  451238 machine.go:97] duration metric: took 1.036701572s to provisionDockerMachine
	I0805 12:59:02.917512  451238 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:59:02.917522  451238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:02.917539  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:02.917946  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:02.917979  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.920900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921383  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.921426  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.921773  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.921958  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.922220  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.003670  451238 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:03.008348  451238 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:03.008384  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:03.008468  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:03.008588  451238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:03.008727  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:03.019098  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:03.042969  451238 start.go:296] duration metric: took 125.441712ms for postStartSetup
	I0805 12:59:03.043011  451238 fix.go:56] duration metric: took 20.961935899s for fixHost
	I0805 12:59:03.043034  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.045667  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046030  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.046062  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046254  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.046508  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046701  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046824  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.047002  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:03.047182  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:03.047192  451238 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:59:03.148773  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862743.120260193
	
	I0805 12:59:03.148798  451238 fix.go:216] guest clock: 1722862743.120260193
	I0805 12:59:03.148807  451238 fix.go:229] Guest: 2024-08-05 12:59:03.120260193 +0000 UTC Remote: 2024-08-05 12:59:03.043015059 +0000 UTC m=+231.118249223 (delta=77.245134ms)
	I0805 12:59:03.148831  451238 fix.go:200] guest clock delta is within tolerance: 77.245134ms
	I0805 12:59:03.148836  451238 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 21.067801046s
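(Editor's note, illustrative only.) The fix.go lines above read the guest's clock with "date +%s.%N" over SSH and compare it against the host to decide whether the drift is tolerable. A tiny sketch of that comparison; the 2s tolerance is an assumption for illustration, not necessarily minikube's threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// What `date +%s.%N` returned over SSH in the log above.
	guestOut := "1722862743.120260193"
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Absolute difference between guest and host clocks.
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock drifted by %v, consider syncing\n", delta)
	}
}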
	I0805 12:59:03.148857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.149131  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:03.152026  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152444  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.152475  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152645  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153423  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153495  451238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:03.153551  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.153860  451238 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:03.153895  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.156566  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156903  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.156963  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157187  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157411  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.157479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.157508  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157594  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.157770  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157782  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.157924  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.158107  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.158344  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.254162  451238 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:03.260684  451238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:03.409837  451238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:03.416010  451238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:03.416093  451238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:03.433548  451238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:03.433584  451238 start.go:495] detecting cgroup driver to use...
	I0805 12:59:03.433667  451238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:03.450756  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:03.467281  451238 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:03.467341  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:03.482537  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:03.498623  451238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:03.621224  451238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:03.781777  451238 docker.go:233] disabling docker service ...
	I0805 12:59:03.781842  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:03.798020  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:03.818262  451238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:03.940897  451238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:04.075622  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:04.092487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:04.112699  451238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:59:04.112769  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.124102  451238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:04.124181  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.136339  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.147689  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.158552  451238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:04.171412  451238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:04.183284  451238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:04.183336  451238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:04.199465  451238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:04.215571  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:04.342540  451238 ssh_runner.go:195] Run: sudo systemctl restart crio
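(Editor's note, illustrative only.) The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so the pause image and cgroup driver match what kubeadm will be told to use, then crio is reloaded. A local sketch of the same three edits applied to an in-memory copy of the file; the starting values are placeholders, the target values are taken from the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// Point cri-o at the pause image expected by Kubernetes v1.20.0.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Switch the cgroup driver to cgroupfs, as in the log.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line and re-add it as "pod" right
	// after cgroup_manager, mirroring the sed -i '/.../d' and '/.../a' pair.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Println(conf)
}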
	I0805 12:59:04.521705  451238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:04.521786  451238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:04.526734  451238 start.go:563] Will wait 60s for crictl version
	I0805 12:59:04.526795  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:04.530528  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:04.572468  451238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:04.572557  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.602411  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.636641  451238 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:59:04.638062  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:04.641240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641734  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:04.641763  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641991  451238 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:04.646446  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:04.659876  451238 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:04.660037  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:59:04.660105  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:04.709636  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:04.709725  451238 ssh_runner.go:195] Run: which lz4
	I0805 12:59:04.714439  451238 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:59:04.719014  451238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:04.719047  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:59:06.414858  451238 crio.go:462] duration metric: took 1.70045694s to copy over tarball
	I0805 12:59:06.414950  451238 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:09.473711  451238 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058718584s)
	I0805 12:59:09.473740  451238 crio.go:469] duration metric: took 3.058854233s to extract the tarball
	I0805 12:59:09.473748  451238 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:09.524420  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:09.562003  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:09.562035  451238 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:59:09.562107  451238 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.562159  451238 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.562156  451238 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.562194  451238 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.562228  451238 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.562256  451238 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.562374  451238 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:59:09.562274  451238 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.563981  451238 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.563993  451238 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.564007  451238 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.564015  451238 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.564032  451238 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.564041  451238 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.564076  451238 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.564075  451238 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:59:09.727888  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.732060  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.732150  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.736408  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.748051  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.753579  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.762561  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:59:09.822623  451238 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:59:09.822681  451238 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.822742  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.824314  451238 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:59:09.824360  451238 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.824404  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905619  451238 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:59:09.905778  451238 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.905738  451238 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:59:09.905944  451238 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.905998  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905851  451238 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:59:09.906075  451238 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.906133  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905861  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916767  451238 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:59:09.916796  451238 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:59:09.916812  451238 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.916830  451238 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:59:09.916864  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916868  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916905  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.916958  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.918683  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.918718  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.918776  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:10.007687  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:59:10.007721  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:59:10.007871  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:10.042432  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:59:10.061343  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:59:10.061400  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:59:10.061469  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:59:10.073852  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:59:10.084957  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:59:10.423355  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:10.563992  451238 cache_images.go:92] duration metric: took 1.001937985s to LoadCachedImages
	W0805 12:59:10.564184  451238 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0805 12:59:10.564211  451238 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:59:10.564345  451238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:10.564427  451238 ssh_runner.go:195] Run: crio config
	I0805 12:59:10.612146  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:59:10.612180  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:10.612197  451238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:10.612226  451238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:59:10.612415  451238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:10.612507  451238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:59:10.623036  451238 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:10.623121  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:10.633484  451238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:59:10.652444  451238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:10.673192  451238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0805 12:59:10.694533  451238 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:10.699901  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:10.714251  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:10.838992  451238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:10.857248  451238 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:59:10.857279  451238 certs.go:194] generating shared ca certs ...
	I0805 12:59:10.857303  451238 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:10.857515  451238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:10.857587  451238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:10.857602  451238 certs.go:256] generating profile certs ...
	I0805 12:59:10.857746  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:59:10.857847  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:59:10.857907  451238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:59:10.858072  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:10.858122  451238 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:10.858143  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:10.858177  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:10.858207  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:10.858235  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:10.858294  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:10.859247  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:10.908518  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:10.949310  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:10.981447  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:11.008085  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:59:11.035539  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:59:11.071371  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:11.099842  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:59:11.135629  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:11.164194  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:11.190595  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:11.219765  451238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:11.240836  451238 ssh_runner.go:195] Run: openssl version
	I0805 12:59:11.247516  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:11.260736  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266004  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266100  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.273012  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:11.285453  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:11.296934  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301588  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301655  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.307459  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:11.318833  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:11.330224  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334864  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334917  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.341338  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:11.353084  451238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:11.358532  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:11.365419  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:11.371581  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:11.378308  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:11.384640  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:11.390622  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:59:11.397027  451238 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:11.397199  451238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:11.397286  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.436612  451238 cri.go:89] found id: ""
	I0805 12:59:11.436689  451238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:11.447906  451238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:11.447927  451238 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:11.447984  451238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:11.459282  451238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:11.460548  451238 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:11.461355  451238 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-635707" cluster setting kubeconfig missing "old-k8s-version-635707" context setting]
	I0805 12:59:11.462324  451238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:11.476306  451238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:11.487869  451238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.41
	I0805 12:59:11.487911  451238 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:11.487927  451238 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:11.487988  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.526601  451238 cri.go:89] found id: ""
	I0805 12:59:11.526674  451238 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:11.545429  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:11.556725  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:11.556755  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:11.556820  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:11.566564  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:11.566648  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:11.576859  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:11.586237  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:11.586329  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:11.596721  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.607239  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:11.607340  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.617626  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:11.627179  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:11.627251  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:11.637566  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:11.648889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:11.780270  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.549918  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.781853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.877381  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.978141  451238 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:12.978250  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.479242  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.978456  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.478575  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.978783  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.479342  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.978307  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.479180  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.978915  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.978574  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.478343  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.978820  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.478488  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.978335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.478945  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.979040  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.479324  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.478367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.978424  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.478877  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.978841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.478635  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.978824  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.479076  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.979222  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.478928  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.978648  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.478951  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.978405  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.479008  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.978521  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.479199  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.979288  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.479030  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.978372  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.479194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.978481  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.479031  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.978796  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.478677  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.979377  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.478595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.979227  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.478695  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.978911  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.479327  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.978361  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.478380  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.478283  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.979257  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.478407  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.978772  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.478395  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.979309  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.478302  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.978791  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.478841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.478344  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.978613  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.478756  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.478363  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.478417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.978356  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.478322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.978417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.478966  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.979317  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.478449  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.978364  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.479294  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.978435  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.478614  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.978526  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.479187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.979090  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.478733  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.978571  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.478525  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.979125  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.478711  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.979266  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.478956  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.979226  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.978634  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.478338  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.978987  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.479290  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.978383  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.478373  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.978412  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.479312  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.479119  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.978313  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.478401  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.979029  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.478963  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.978393  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.478418  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.978381  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.479229  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.979172  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.479251  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.979183  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.478722  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.979248  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.478527  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.978581  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.478499  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.978520  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.478843  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.978536  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.478504  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.979179  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:12.979258  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:13.022653  451238 cri.go:89] found id: ""
	I0805 13:00:13.022680  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.022689  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:13.022696  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:13.022766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:13.059292  451238 cri.go:89] found id: ""
	I0805 13:00:13.059326  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.059336  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:13.059343  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:13.059399  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:13.098750  451238 cri.go:89] found id: ""
	I0805 13:00:13.098782  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.098793  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:13.098802  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:13.098866  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:13.133307  451238 cri.go:89] found id: ""
	I0805 13:00:13.133338  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.133346  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:13.133353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:13.133420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:13.171124  451238 cri.go:89] found id: ""
	I0805 13:00:13.171160  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.171170  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:13.171177  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:13.171237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:13.209200  451238 cri.go:89] found id: ""
	I0805 13:00:13.209235  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.209247  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:13.209254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:13.209312  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:13.244261  451238 cri.go:89] found id: ""
	I0805 13:00:13.244302  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.244313  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:13.244324  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:13.244397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:13.283295  451238 cri.go:89] found id: ""
	I0805 13:00:13.283331  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.283342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:13.283356  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:13.283372  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:13.344134  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:13.344174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:13.384084  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:13.384119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:13.433784  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:13.433821  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:13.449756  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:13.449786  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:13.573090  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:16.074053  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:16.087817  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:16.087900  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:16.130938  451238 cri.go:89] found id: ""
	I0805 13:00:16.130970  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.130981  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:16.130989  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:16.131058  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:16.184208  451238 cri.go:89] found id: ""
	I0805 13:00:16.184245  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.184259  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:16.184269  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:16.184346  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:16.230959  451238 cri.go:89] found id: ""
	I0805 13:00:16.230998  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.231011  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:16.231020  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:16.231100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:16.282886  451238 cri.go:89] found id: ""
	I0805 13:00:16.282940  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.282954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:16.282963  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:16.283024  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:16.320345  451238 cri.go:89] found id: ""
	I0805 13:00:16.320381  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.320397  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:16.320404  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:16.320521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:16.356390  451238 cri.go:89] found id: ""
	I0805 13:00:16.356427  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.356439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:16.356447  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:16.356503  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:16.400477  451238 cri.go:89] found id: ""
	I0805 13:00:16.400510  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.400529  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:16.400539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:16.400612  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:16.440634  451238 cri.go:89] found id: ""
	I0805 13:00:16.440662  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.440673  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:16.440685  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:16.440702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:16.510879  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:16.510922  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:16.554294  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:16.554332  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:16.607798  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:16.607853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:16.622618  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:16.622655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:16.702599  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.202789  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:19.215776  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:19.215851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:19.250503  451238 cri.go:89] found id: ""
	I0805 13:00:19.250540  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.250551  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:19.250558  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:19.250630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:19.287358  451238 cri.go:89] found id: ""
	I0805 13:00:19.287392  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.287403  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:19.287412  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:19.287484  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:19.322167  451238 cri.go:89] found id: ""
	I0805 13:00:19.322195  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.322203  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:19.322209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:19.322262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:19.356874  451238 cri.go:89] found id: ""
	I0805 13:00:19.356905  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.356923  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:19.356931  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:19.357006  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:19.395172  451238 cri.go:89] found id: ""
	I0805 13:00:19.395206  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.395217  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:19.395227  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:19.395294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:19.438404  451238 cri.go:89] found id: ""
	I0805 13:00:19.438431  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.438439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:19.438445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:19.438510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:19.474727  451238 cri.go:89] found id: ""
	I0805 13:00:19.474755  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.474762  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:19.474769  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:19.474832  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:19.513906  451238 cri.go:89] found id: ""
	I0805 13:00:19.513945  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.513953  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:19.513963  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:19.513977  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:19.528337  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:19.528378  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:19.601135  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.601168  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:19.601185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:19.676792  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:19.676844  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:19.716861  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:19.716894  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.266971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:22.280346  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:22.280422  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:22.314788  451238 cri.go:89] found id: ""
	I0805 13:00:22.314816  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.314824  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:22.314831  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:22.314884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:22.357357  451238 cri.go:89] found id: ""
	I0805 13:00:22.357394  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.357405  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:22.357414  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:22.357483  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:22.393254  451238 cri.go:89] found id: ""
	I0805 13:00:22.393288  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.393296  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:22.393302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:22.393366  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:22.434766  451238 cri.go:89] found id: ""
	I0805 13:00:22.434796  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.434807  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:22.434815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:22.434887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:22.475649  451238 cri.go:89] found id: ""
	I0805 13:00:22.475676  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.475684  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:22.475690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:22.475754  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:22.515633  451238 cri.go:89] found id: ""
	I0805 13:00:22.515662  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.515670  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:22.515677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:22.515757  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:22.550716  451238 cri.go:89] found id: ""
	I0805 13:00:22.550749  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.550759  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:22.550767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:22.550849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:22.588537  451238 cri.go:89] found id: ""
	I0805 13:00:22.588571  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.588583  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:22.588595  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:22.588609  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.638535  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:22.638577  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:22.654879  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:22.654919  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:22.721482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:22.721513  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:22.721529  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:22.801442  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:22.801489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.343805  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:25.358068  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:25.358176  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:25.393734  451238 cri.go:89] found id: ""
	I0805 13:00:25.393767  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.393778  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:25.393785  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:25.393849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:25.428217  451238 cri.go:89] found id: ""
	I0805 13:00:25.428244  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.428252  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:25.428257  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:25.428316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:25.462826  451238 cri.go:89] found id: ""
	I0805 13:00:25.462858  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.462869  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:25.462877  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:25.462961  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:25.502960  451238 cri.go:89] found id: ""
	I0805 13:00:25.502989  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.502998  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:25.503006  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:25.503072  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:25.538859  451238 cri.go:89] found id: ""
	I0805 13:00:25.538888  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.538897  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:25.538902  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:25.538964  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:25.577850  451238 cri.go:89] found id: ""
	I0805 13:00:25.577883  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.577894  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:25.577901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:25.577988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:25.611728  451238 cri.go:89] found id: ""
	I0805 13:00:25.611773  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.611785  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:25.611793  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:25.611865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:25.654987  451238 cri.go:89] found id: ""
	I0805 13:00:25.655018  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.655027  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:25.655039  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:25.655052  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:25.669124  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:25.669160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:25.747354  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:25.747380  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:25.747398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:25.825198  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:25.825241  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.865511  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:25.865546  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
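	The cycle above is minikube probing the node for each control-plane component by container name and coming up empty every time. A minimal sketch of the same probe, assuming shell access to the node and that crictl is installed (the for-loop wrapper is illustrative; the crictl invocation and component names are taken verbatim from the log):
	# Query CRI-O for any container (running or exited) whose name matches each
	# control-plane component; an empty result corresponds to the repeated
	# 'No container was found matching ...' warnings above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no containers matching $name"
	done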
	I0805 13:00:28.418263  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:28.431831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:28.431895  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:28.470249  451238 cri.go:89] found id: ""
	I0805 13:00:28.470280  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.470291  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:28.470301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:28.470373  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:28.506935  451238 cri.go:89] found id: ""
	I0805 13:00:28.506968  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.506977  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:28.506985  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:28.507053  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:28.546621  451238 cri.go:89] found id: ""
	I0805 13:00:28.546652  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.546663  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:28.546671  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:28.546749  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:28.584699  451238 cri.go:89] found id: ""
	I0805 13:00:28.584734  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.584745  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:28.584753  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:28.584820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:28.620693  451238 cri.go:89] found id: ""
	I0805 13:00:28.620726  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.620736  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:28.620744  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:28.620814  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:28.657340  451238 cri.go:89] found id: ""
	I0805 13:00:28.657370  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.657379  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:28.657385  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:28.657438  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:28.695126  451238 cri.go:89] found id: ""
	I0805 13:00:28.695156  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.695166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:28.695174  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:28.695239  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:28.729757  451238 cri.go:89] found id: ""
	I0805 13:00:28.729808  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.729821  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:28.729834  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:28.729852  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:28.769642  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:28.769675  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:28.818076  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:28.818114  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:28.831466  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:28.831496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:28.902788  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:28.902818  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:28.902836  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.482482  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:31.497767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:31.497867  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:31.536922  451238 cri.go:89] found id: ""
	I0805 13:00:31.536948  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.536960  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:31.536969  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:31.537040  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:31.572422  451238 cri.go:89] found id: ""
	I0805 13:00:31.572456  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.572466  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:31.572472  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:31.572531  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:31.607961  451238 cri.go:89] found id: ""
	I0805 13:00:31.607996  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.608008  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:31.608016  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:31.608082  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:31.641771  451238 cri.go:89] found id: ""
	I0805 13:00:31.641800  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.641822  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:31.641830  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:31.641904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:31.681661  451238 cri.go:89] found id: ""
	I0805 13:00:31.681695  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.681707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:31.681715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:31.681791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:31.723777  451238 cri.go:89] found id: ""
	I0805 13:00:31.723814  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.723823  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:31.723829  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:31.723922  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:31.759898  451238 cri.go:89] found id: ""
	I0805 13:00:31.759935  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.759948  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:31.759957  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:31.760022  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:31.798433  451238 cri.go:89] found id: ""
	I0805 13:00:31.798462  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.798470  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:31.798480  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:31.798497  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:31.872005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:31.872030  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:31.872045  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.952201  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:31.952240  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:31.995920  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:31.995955  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:32.047453  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:32.047493  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.562369  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:34.576644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:34.576708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:34.613002  451238 cri.go:89] found id: ""
	I0805 13:00:34.613036  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.613047  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:34.613056  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:34.613127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:34.650723  451238 cri.go:89] found id: ""
	I0805 13:00:34.650757  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.650769  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:34.650777  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:34.650851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:34.689047  451238 cri.go:89] found id: ""
	I0805 13:00:34.689073  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.689081  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:34.689088  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:34.689148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:34.727552  451238 cri.go:89] found id: ""
	I0805 13:00:34.727592  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.727604  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:34.727612  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:34.727683  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:34.761661  451238 cri.go:89] found id: ""
	I0805 13:00:34.761696  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.761707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:34.761715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:34.761791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:34.800062  451238 cri.go:89] found id: ""
	I0805 13:00:34.800116  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.800128  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:34.800137  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:34.800198  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:34.833536  451238 cri.go:89] found id: ""
	I0805 13:00:34.833566  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.833578  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:34.833586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:34.833654  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:34.868079  451238 cri.go:89] found id: ""
	I0805 13:00:34.868117  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.868126  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:34.868135  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:34.868149  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:34.920092  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:34.920124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.934484  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:34.934510  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:35.007716  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:35.007751  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:35.007768  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:35.088183  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:35.088233  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.633443  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:37.647405  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:37.647470  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:37.684682  451238 cri.go:89] found id: ""
	I0805 13:00:37.684711  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.684720  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:37.684727  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:37.684779  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:37.723413  451238 cri.go:89] found id: ""
	I0805 13:00:37.723442  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.723449  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:37.723455  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:37.723506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:37.758388  451238 cri.go:89] found id: ""
	I0805 13:00:37.758418  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.758428  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:37.758437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:37.758501  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:37.797846  451238 cri.go:89] found id: ""
	I0805 13:00:37.797879  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.797890  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:37.797901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:37.797971  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:37.837053  451238 cri.go:89] found id: ""
	I0805 13:00:37.837082  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.837092  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:37.837104  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:37.837163  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:37.876185  451238 cri.go:89] found id: ""
	I0805 13:00:37.876211  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.876220  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:37.876226  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:37.876294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:37.915318  451238 cri.go:89] found id: ""
	I0805 13:00:37.915350  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.915362  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:37.915370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:37.915429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:37.953916  451238 cri.go:89] found id: ""
	I0805 13:00:37.953944  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.953954  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:37.953964  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:37.953976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.991116  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:37.991154  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:38.043796  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:38.043838  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:38.058636  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:38.058669  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:38.143022  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:38.143051  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:38.143067  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:40.721468  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:40.735679  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:40.735774  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:40.773583  451238 cri.go:89] found id: ""
	I0805 13:00:40.773609  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.773617  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:40.773626  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:40.773685  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:40.819857  451238 cri.go:89] found id: ""
	I0805 13:00:40.819886  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.819895  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:40.819901  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:40.819963  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:40.857156  451238 cri.go:89] found id: ""
	I0805 13:00:40.857184  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.857192  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:40.857198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:40.857251  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:40.892933  451238 cri.go:89] found id: ""
	I0805 13:00:40.892970  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.892981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:40.892990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:40.893046  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:40.927128  451238 cri.go:89] found id: ""
	I0805 13:00:40.927163  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.927173  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:40.927182  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:40.927237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:40.961790  451238 cri.go:89] found id: ""
	I0805 13:00:40.961817  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.961826  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:40.961832  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:40.961886  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:40.996249  451238 cri.go:89] found id: ""
	I0805 13:00:40.996282  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.996293  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:40.996300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:40.996371  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:41.032305  451238 cri.go:89] found id: ""
	I0805 13:00:41.032332  451238 logs.go:276] 0 containers: []
	W0805 13:00:41.032342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:41.032358  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:41.032375  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:41.075993  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:41.076027  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:41.126020  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:41.126057  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:41.140263  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:41.140288  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:41.216648  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:41.216670  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:41.216683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:43.796367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:43.810086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:43.810162  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.844373  451238 cri.go:89] found id: ""
	I0805 13:00:43.844410  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.844422  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:43.844430  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:43.844502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:43.880249  451238 cri.go:89] found id: ""
	I0805 13:00:43.880285  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.880295  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:43.880303  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:43.880376  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:43.921279  451238 cri.go:89] found id: ""
	I0805 13:00:43.921313  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.921323  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:43.921329  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:43.921382  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:43.963736  451238 cri.go:89] found id: ""
	I0805 13:00:43.963782  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.963794  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:43.963803  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:43.963869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:44.009001  451238 cri.go:89] found id: ""
	I0805 13:00:44.009038  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.009050  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:44.009057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:44.009128  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:44.059484  451238 cri.go:89] found id: ""
	I0805 13:00:44.059514  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.059526  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:44.059534  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:44.059605  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:44.102043  451238 cri.go:89] found id: ""
	I0805 13:00:44.102075  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.102088  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:44.102094  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:44.102170  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:44.137518  451238 cri.go:89] found id: ""
	I0805 13:00:44.137558  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.137569  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:44.137584  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:44.137600  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:44.188139  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:44.188175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:44.202544  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:44.202588  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:44.278486  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:44.278508  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:44.278521  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:44.363419  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:44.363458  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
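	Every "describe nodes" attempt above fails the same way: the kubectl bundled with the cluster cannot reach the apiserver because nothing is serving on localhost:8443. A minimal check, assuming shell access to the node (the ss probe is only an illustration; the kubectl path and kubeconfig are copied verbatim from the log):
	# Confirm nothing is listening on the apiserver port, then retry the exact
	# command minikube runs; both keep failing until kube-apiserver comes up.
	sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig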
	I0805 13:00:46.905665  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:46.922141  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:46.922206  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:46.963468  451238 cri.go:89] found id: ""
	I0805 13:00:46.963494  451238 logs.go:276] 0 containers: []
	W0805 13:00:46.963502  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:46.963508  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:46.963557  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:47.003445  451238 cri.go:89] found id: ""
	I0805 13:00:47.003472  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.003480  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:47.003486  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:47.003537  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:47.043271  451238 cri.go:89] found id: ""
	I0805 13:00:47.043306  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.043318  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:47.043326  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:47.043394  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:47.079843  451238 cri.go:89] found id: ""
	I0805 13:00:47.079874  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.079884  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:47.079893  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:47.079954  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:47.116819  451238 cri.go:89] found id: ""
	I0805 13:00:47.116847  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.116856  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:47.116861  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:47.116917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:47.156302  451238 cri.go:89] found id: ""
	I0805 13:00:47.156331  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.156340  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:47.156353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:47.156410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:47.200419  451238 cri.go:89] found id: ""
	I0805 13:00:47.200449  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.200463  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:47.200469  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:47.200533  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:47.237483  451238 cri.go:89] found id: ""
	I0805 13:00:47.237515  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.237522  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:47.237532  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:47.237545  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:47.251598  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:47.251632  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:47.326457  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:47.326483  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:47.326501  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.410413  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:47.410455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:47.452696  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:47.452732  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.005335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:50.019610  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:50.019679  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:50.057401  451238 cri.go:89] found id: ""
	I0805 13:00:50.057435  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.057447  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:50.057456  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:50.057516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:50.101710  451238 cri.go:89] found id: ""
	I0805 13:00:50.101743  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.101751  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:50.101758  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:50.101822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:50.139624  451238 cri.go:89] found id: ""
	I0805 13:00:50.139658  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.139669  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:50.139677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:50.139761  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:50.176004  451238 cri.go:89] found id: ""
	I0805 13:00:50.176031  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.176039  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:50.176045  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:50.176123  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:50.219319  451238 cri.go:89] found id: ""
	I0805 13:00:50.219352  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.219362  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:50.219369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:50.219437  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:50.287443  451238 cri.go:89] found id: ""
	I0805 13:00:50.287478  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.287489  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:50.287498  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:50.287582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:50.321018  451238 cri.go:89] found id: ""
	I0805 13:00:50.321047  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.321056  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:50.321063  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:50.321124  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:50.354559  451238 cri.go:89] found id: ""
	I0805 13:00:50.354597  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.354610  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:50.354625  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:50.354642  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:50.398621  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:50.398657  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.451693  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:50.451735  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:50.466810  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:50.466851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:50.542431  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:50.542461  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:50.542482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.128466  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:53.144139  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:53.144216  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:53.178383  451238 cri.go:89] found id: ""
	I0805 13:00:53.178427  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.178438  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:53.178447  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:53.178516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:53.220312  451238 cri.go:89] found id: ""
	I0805 13:00:53.220348  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.220358  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:53.220365  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:53.220432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:53.255352  451238 cri.go:89] found id: ""
	I0805 13:00:53.255380  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.255390  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:53.255398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:53.255473  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:53.293254  451238 cri.go:89] found id: ""
	I0805 13:00:53.293292  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.293311  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:53.293320  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:53.293395  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:53.329407  451238 cri.go:89] found id: ""
	I0805 13:00:53.329436  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.329448  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:53.329455  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:53.329523  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:53.362838  451238 cri.go:89] found id: ""
	I0805 13:00:53.362868  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.362876  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:53.362883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:53.362957  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:53.399283  451238 cri.go:89] found id: ""
	I0805 13:00:53.399313  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.399324  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:53.399332  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:53.399405  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:53.438527  451238 cri.go:89] found id: ""
	I0805 13:00:53.438558  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.438567  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:53.438578  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:53.438597  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:53.492709  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:53.492760  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:53.507522  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:53.507555  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:53.581690  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:53.581710  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:53.581724  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.664402  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:53.664451  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.209640  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:56.224403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:56.224487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:56.266214  451238 cri.go:89] found id: ""
	I0805 13:00:56.266243  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.266254  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:56.266263  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:56.266328  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:56.304034  451238 cri.go:89] found id: ""
	I0805 13:00:56.304070  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.304082  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:56.304091  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:56.304172  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:56.342133  451238 cri.go:89] found id: ""
	I0805 13:00:56.342159  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.342167  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:56.342173  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:56.342225  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:56.378549  451238 cri.go:89] found id: ""
	I0805 13:00:56.378588  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.378599  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:56.378606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:56.378667  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:56.415613  451238 cri.go:89] found id: ""
	I0805 13:00:56.415641  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.415651  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:56.415657  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:56.415715  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:56.451915  451238 cri.go:89] found id: ""
	I0805 13:00:56.451944  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.451953  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:56.451960  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:56.452021  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:56.492219  451238 cri.go:89] found id: ""
	I0805 13:00:56.492255  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.492267  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:56.492275  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:56.492347  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:56.534564  451238 cri.go:89] found id: ""
	I0805 13:00:56.534606  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.534618  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:56.534632  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:56.534652  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:56.548772  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:56.548813  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:56.625649  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:56.625678  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:56.625695  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:56.716735  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:56.716787  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.771881  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:56.771910  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.325624  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:59.338796  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:59.338869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:59.375002  451238 cri.go:89] found id: ""
	I0805 13:00:59.375039  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.375050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:59.375059  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:59.375138  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:59.410778  451238 cri.go:89] found id: ""
	I0805 13:00:59.410800  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.410810  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:59.410817  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:59.410873  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:59.453728  451238 cri.go:89] found id: ""
	I0805 13:00:59.453760  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.453771  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:59.453779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:59.453845  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:59.492968  451238 cri.go:89] found id: ""
	I0805 13:00:59.493002  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.493013  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:59.493021  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:59.493091  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:59.533342  451238 cri.go:89] found id: ""
	I0805 13:00:59.533372  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.533383  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:59.533390  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:59.533445  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:59.569677  451238 cri.go:89] found id: ""
	I0805 13:00:59.569705  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.569715  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:59.569722  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:59.569789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:59.605106  451238 cri.go:89] found id: ""
	I0805 13:00:59.605139  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.605150  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:59.605158  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:59.605228  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:59.639948  451238 cri.go:89] found id: ""
	I0805 13:00:59.639980  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.639989  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:59.640000  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:59.640016  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:59.679926  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:59.679956  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.731545  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:59.731591  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:59.746286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:59.746320  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:59.828398  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:59.828420  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:59.828439  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.412560  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:02.429633  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:02.429718  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:02.475916  451238 cri.go:89] found id: ""
	I0805 13:01:02.475951  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.475963  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:02.475971  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:02.476061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:02.528807  451238 cri.go:89] found id: ""
	I0805 13:01:02.528837  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.528849  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:02.528856  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:02.528924  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:02.575164  451238 cri.go:89] found id: ""
	I0805 13:01:02.575194  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.575210  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:02.575218  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:02.575286  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:02.614709  451238 cri.go:89] found id: ""
	I0805 13:01:02.614800  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.614815  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:02.614824  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:02.614902  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:02.654941  451238 cri.go:89] found id: ""
	I0805 13:01:02.654979  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.654990  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:02.654997  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:02.655069  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:02.690552  451238 cri.go:89] found id: ""
	I0805 13:01:02.690586  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.690595  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:02.690602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:02.690657  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:02.725607  451238 cri.go:89] found id: ""
	I0805 13:01:02.725644  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.725656  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:02.725665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:02.725745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:02.760180  451238 cri.go:89] found id: ""
	I0805 13:01:02.760211  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.760223  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:02.760244  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:02.760262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:02.813071  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:02.813128  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:02.828633  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:02.828665  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:02.898049  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:02.898074  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:02.898087  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.988077  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:02.988124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:05.532719  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:05.546423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:05.546489  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:05.590978  451238 cri.go:89] found id: ""
	I0805 13:01:05.591006  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.591013  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:05.591019  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:05.591071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:05.631251  451238 cri.go:89] found id: ""
	I0805 13:01:05.631287  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.631298  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:05.631306  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:05.631391  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:05.671826  451238 cri.go:89] found id: ""
	I0805 13:01:05.671863  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.671875  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:05.671883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:05.671951  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:05.708147  451238 cri.go:89] found id: ""
	I0805 13:01:05.708176  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.708186  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:05.708194  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:05.708262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:05.741962  451238 cri.go:89] found id: ""
	I0805 13:01:05.741994  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.742006  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:05.742015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:05.742087  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:05.777930  451238 cri.go:89] found id: ""
	I0805 13:01:05.777965  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.777976  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:05.777985  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:05.778061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:05.813066  451238 cri.go:89] found id: ""
	I0805 13:01:05.813099  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.813111  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:05.813119  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:05.813189  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:05.849382  451238 cri.go:89] found id: ""
	I0805 13:01:05.849410  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.849418  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:05.849428  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:05.849440  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:05.903376  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:05.903423  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:05.918540  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:05.918575  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:05.990608  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:05.990637  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:05.990658  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:06.072524  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:06.072571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.617528  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:08.631637  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:08.631713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:08.669999  451238 cri.go:89] found id: ""
	I0805 13:01:08.670039  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.670050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:08.670065  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:08.670147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:08.705322  451238 cri.go:89] found id: ""
	I0805 13:01:08.705356  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.705365  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:08.705370  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:08.705442  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:08.744884  451238 cri.go:89] found id: ""
	I0805 13:01:08.744915  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.744927  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:08.744936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:08.745018  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:08.782394  451238 cri.go:89] found id: ""
	I0805 13:01:08.782428  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.782440  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:08.782448  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:08.782518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:08.816989  451238 cri.go:89] found id: ""
	I0805 13:01:08.817018  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.817027  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:08.817034  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:08.817106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:08.856389  451238 cri.go:89] found id: ""
	I0805 13:01:08.856420  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.856431  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:08.856439  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:08.856506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:08.891942  451238 cri.go:89] found id: ""
	I0805 13:01:08.891975  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.891986  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:08.891995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:08.892064  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:08.930329  451238 cri.go:89] found id: ""
	I0805 13:01:08.930364  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.930375  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:08.930389  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:08.930406  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.972574  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:08.972610  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:09.026194  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:09.026228  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:09.040973  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:09.041002  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:09.115094  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:09.115121  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:09.115143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:11.698322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:11.711841  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:11.711927  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:11.749152  451238 cri.go:89] found id: ""
	I0805 13:01:11.749187  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.749199  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:11.749207  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:11.749274  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:11.785395  451238 cri.go:89] found id: ""
	I0805 13:01:11.785430  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.785441  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:11.785449  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:11.785516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:11.822240  451238 cri.go:89] found id: ""
	I0805 13:01:11.822282  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.822293  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:11.822302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:11.822372  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:11.858755  451238 cri.go:89] found id: ""
	I0805 13:01:11.858794  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.858805  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:11.858814  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:11.858884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:11.893064  451238 cri.go:89] found id: ""
	I0805 13:01:11.893101  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.893113  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:11.893121  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:11.893195  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:11.930965  451238 cri.go:89] found id: ""
	I0805 13:01:11.931003  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.931015  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:11.931025  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:11.931089  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:11.967594  451238 cri.go:89] found id: ""
	I0805 13:01:11.967620  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.967630  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:11.967638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:11.967697  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:12.004978  451238 cri.go:89] found id: ""
	I0805 13:01:12.005007  451238 logs.go:276] 0 containers: []
	W0805 13:01:12.005015  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:12.005025  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:12.005037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:12.087476  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:12.087500  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:12.087515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:12.177690  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:12.177757  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:12.222858  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:12.222889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:12.273322  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:12.273362  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:14.788210  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:14.802351  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:14.802426  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:14.837705  451238 cri.go:89] found id: ""
	I0805 13:01:14.837736  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.837746  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:14.837755  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:14.837824  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:14.873389  451238 cri.go:89] found id: ""
	I0805 13:01:14.873420  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.873430  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:14.873438  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:14.873506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:14.913969  451238 cri.go:89] found id: ""
	I0805 13:01:14.913999  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.914009  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:14.914018  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:14.914081  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:14.953478  451238 cri.go:89] found id: ""
	I0805 13:01:14.953510  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.953521  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:14.953528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:14.953584  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:14.992166  451238 cri.go:89] found id: ""
	I0805 13:01:14.992197  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.992206  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:14.992212  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:14.992291  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:15.031258  451238 cri.go:89] found id: ""
	I0805 13:01:15.031285  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.031293  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:15.031300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:15.031353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:15.068944  451238 cri.go:89] found id: ""
	I0805 13:01:15.068972  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.068980  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:15.068986  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:15.069042  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:15.105413  451238 cri.go:89] found id: ""
	I0805 13:01:15.105443  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.105454  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:15.105467  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:15.105489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:15.161925  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:15.161969  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:15.177174  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:15.177206  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:15.257950  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:15.257975  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:15.257989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:15.336672  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:15.336716  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:17.876314  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:17.889842  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:17.889909  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:17.928050  451238 cri.go:89] found id: ""
	I0805 13:01:17.928077  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.928086  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:17.928092  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:17.928150  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:17.965713  451238 cri.go:89] found id: ""
	I0805 13:01:17.965751  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.965762  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:17.965770  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:17.965837  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:18.002938  451238 cri.go:89] found id: ""
	I0805 13:01:18.002972  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.002984  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:18.002992  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:18.003062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:18.040140  451238 cri.go:89] found id: ""
	I0805 13:01:18.040178  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.040190  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:18.040198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:18.040269  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:18.075427  451238 cri.go:89] found id: ""
	I0805 13:01:18.075463  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.075475  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:18.075490  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:18.075558  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:18.113469  451238 cri.go:89] found id: ""
	I0805 13:01:18.113507  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.113521  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:18.113528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:18.113587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:18.152626  451238 cri.go:89] found id: ""
	I0805 13:01:18.152662  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.152672  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:18.152678  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:18.152745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:18.189540  451238 cri.go:89] found id: ""
	I0805 13:01:18.189577  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.189590  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:18.189602  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:18.189618  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:18.244314  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:18.244353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:18.257912  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:18.257939  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:18.339659  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:18.339682  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:18.339699  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:18.425391  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:18.425449  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:20.975889  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:20.989798  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:20.989868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:21.030858  451238 cri.go:89] found id: ""
	I0805 13:01:21.030894  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.030906  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:21.030915  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:21.030979  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:21.067367  451238 cri.go:89] found id: ""
	I0805 13:01:21.067402  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.067411  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:21.067419  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:21.067476  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:21.104307  451238 cri.go:89] found id: ""
	I0805 13:01:21.104337  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.104352  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:21.104361  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:21.104424  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:21.141486  451238 cri.go:89] found id: ""
	I0805 13:01:21.141519  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.141531  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:21.141539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:21.141606  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:21.179247  451238 cri.go:89] found id: ""
	I0805 13:01:21.179305  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.179317  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:21.179330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:21.179406  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:21.215030  451238 cri.go:89] found id: ""
	I0805 13:01:21.215065  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.215075  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:21.215083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:21.215152  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:21.252982  451238 cri.go:89] found id: ""
	I0805 13:01:21.253008  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.253016  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:21.253022  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:21.253097  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:21.290256  451238 cri.go:89] found id: ""
	I0805 13:01:21.290292  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.290302  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:21.290325  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:21.290343  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:21.342809  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:21.342855  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:21.357959  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:21.358000  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:21.433087  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:21.433120  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:21.433143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:21.514261  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:21.514312  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.060402  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:24.076056  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:24.076131  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:24.115976  451238 cri.go:89] found id: ""
	I0805 13:01:24.116009  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.116022  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:24.116031  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:24.116111  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:24.158411  451238 cri.go:89] found id: ""
	I0805 13:01:24.158440  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.158448  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:24.158454  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:24.158520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:24.194589  451238 cri.go:89] found id: ""
	I0805 13:01:24.194624  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.194635  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:24.194644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:24.194720  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:24.231528  451238 cri.go:89] found id: ""
	I0805 13:01:24.231562  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.231569  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:24.231576  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:24.231649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:24.268491  451238 cri.go:89] found id: ""
	I0805 13:01:24.268523  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.268532  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:24.268538  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:24.268602  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:24.306718  451238 cri.go:89] found id: ""
	I0805 13:01:24.306752  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.306763  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:24.306772  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:24.306839  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:24.343552  451238 cri.go:89] found id: ""
	I0805 13:01:24.343578  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.343586  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:24.343593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:24.343649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:24.384555  451238 cri.go:89] found id: ""
	I0805 13:01:24.384590  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.384602  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:24.384615  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:24.384633  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.430256  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:24.430298  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:24.484616  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:24.484661  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:24.500926  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:24.500958  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:24.581379  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:24.581410  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:24.581424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.167538  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:27.181959  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:27.182035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:27.223243  451238 cri.go:89] found id: ""
	I0805 13:01:27.223282  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.223293  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:27.223301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:27.223374  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:27.257806  451238 cri.go:89] found id: ""
	I0805 13:01:27.257843  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.257856  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:27.257864  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:27.257940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:27.304306  451238 cri.go:89] found id: ""
	I0805 13:01:27.304342  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.304353  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:27.304370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:27.304439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:27.342595  451238 cri.go:89] found id: ""
	I0805 13:01:27.342623  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.342631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:27.342638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:27.342707  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:27.385628  451238 cri.go:89] found id: ""
	I0805 13:01:27.385661  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.385670  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:27.385677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:27.385760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:27.425059  451238 cri.go:89] found id: ""
	I0805 13:01:27.425091  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.425100  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:27.425106  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:27.425175  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:27.465739  451238 cri.go:89] found id: ""
	I0805 13:01:27.465783  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.465794  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:27.465807  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:27.465869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:27.506431  451238 cri.go:89] found id: ""
	I0805 13:01:27.506460  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.506468  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:27.506477  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:27.506494  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:27.586440  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:27.586467  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:27.586482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.667826  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:27.667869  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:27.710458  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:27.710496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:27.763057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:27.763100  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.278799  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:30.293788  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:30.293874  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:30.336209  451238 cri.go:89] found id: ""
	I0805 13:01:30.336240  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.336248  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:30.336255  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:30.336323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:30.371593  451238 cri.go:89] found id: ""
	I0805 13:01:30.371627  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.371642  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:30.371649  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:30.371714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:30.408266  451238 cri.go:89] found id: ""
	I0805 13:01:30.408298  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.408317  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:30.408325  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:30.408388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:30.448841  451238 cri.go:89] found id: ""
	I0805 13:01:30.448864  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.448872  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:30.448878  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:30.448940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:30.488367  451238 cri.go:89] found id: ""
	I0805 13:01:30.488403  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.488411  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:30.488418  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:30.488485  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:30.527131  451238 cri.go:89] found id: ""
	I0805 13:01:30.527163  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.527173  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:30.527181  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:30.527249  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:30.568089  451238 cri.go:89] found id: ""
	I0805 13:01:30.568122  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.568131  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:30.568138  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:30.568203  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:30.605952  451238 cri.go:89] found id: ""
	I0805 13:01:30.605990  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.606007  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:30.606021  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:30.606041  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:30.656449  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:30.656491  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:30.710124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:30.710164  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.724417  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:30.724455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:30.820639  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:30.820669  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:30.820687  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.403497  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:33.419581  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:33.419651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:33.462011  451238 cri.go:89] found id: ""
	I0805 13:01:33.462042  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.462051  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:33.462057  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:33.462126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:33.502476  451238 cri.go:89] found id: ""
	I0805 13:01:33.502509  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.502519  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:33.502527  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:33.502601  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:33.547392  451238 cri.go:89] found id: ""
	I0805 13:01:33.547421  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.547430  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:33.547437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:33.547490  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:33.584013  451238 cri.go:89] found id: ""
	I0805 13:01:33.584040  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.584048  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:33.584054  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:33.584125  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:33.617325  451238 cri.go:89] found id: ""
	I0805 13:01:33.617359  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.617367  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:33.617374  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:33.617429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:33.651922  451238 cri.go:89] found id: ""
	I0805 13:01:33.651959  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.651971  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:33.651980  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:33.652049  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:33.689487  451238 cri.go:89] found id: ""
	I0805 13:01:33.689515  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.689522  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:33.689529  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:33.689580  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:33.723220  451238 cri.go:89] found id: ""
	I0805 13:01:33.723251  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.723260  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:33.723270  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:33.723282  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:33.777271  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:33.777311  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:33.792497  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:33.792532  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:33.866801  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:33.866826  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:33.866842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.946739  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:33.946774  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
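	The block above is one full pass of minikube's control-plane probe: for each expected component it lists CRI containers by name, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. Below is a minimal sketch of the same per-component probe, assuming shell access to the node (for example via "minikube ssh -p <profile>"); only the crictl invocation is taken verbatim from the log, the loop and echo wrapper are illustrative.

	    # Minimal sketch, assuming shell access to the node; only the crictl call
	    # mirrors the log above, the loop itself is illustrative.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done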
	I0805 13:01:36.486108  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:36.501316  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:36.501397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:36.542082  451238 cri.go:89] found id: ""
	I0805 13:01:36.542118  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.542130  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:36.542139  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:36.542217  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:36.581005  451238 cri.go:89] found id: ""
	I0805 13:01:36.581047  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.581059  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:36.581068  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:36.581148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:36.623945  451238 cri.go:89] found id: ""
	I0805 13:01:36.623974  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.623982  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:36.623987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:36.624041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:36.661632  451238 cri.go:89] found id: ""
	I0805 13:01:36.661665  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.661673  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:36.661680  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:36.661738  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:36.701808  451238 cri.go:89] found id: ""
	I0805 13:01:36.701839  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.701850  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:36.701857  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:36.701941  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:36.742287  451238 cri.go:89] found id: ""
	I0805 13:01:36.742320  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.742331  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:36.742340  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:36.742410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:36.794581  451238 cri.go:89] found id: ""
	I0805 13:01:36.794610  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.794621  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:36.794629  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:36.794690  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:36.833271  451238 cri.go:89] found id: ""
	I0805 13:01:36.833301  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.833311  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:36.833325  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:36.833346  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:36.921427  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:36.921467  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:36.965468  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:36.965503  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:37.018475  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:37.018515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:37.033671  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:37.033697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:37.105339  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.606042  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:39.619215  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:39.619296  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:39.655614  451238 cri.go:89] found id: ""
	I0805 13:01:39.655648  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.655660  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:39.655668  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:39.655760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:39.691489  451238 cri.go:89] found id: ""
	I0805 13:01:39.691523  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.691535  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:39.691543  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:39.691610  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:39.726394  451238 cri.go:89] found id: ""
	I0805 13:01:39.726427  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.726438  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:39.726446  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:39.726518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:39.759847  451238 cri.go:89] found id: ""
	I0805 13:01:39.759897  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.759909  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:39.759918  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:39.759988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:39.795011  451238 cri.go:89] found id: ""
	I0805 13:01:39.795043  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.795051  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:39.795057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:39.795120  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:39.831302  451238 cri.go:89] found id: ""
	I0805 13:01:39.831336  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.831346  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:39.831356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:39.831432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:39.866506  451238 cri.go:89] found id: ""
	I0805 13:01:39.866540  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.866547  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:39.866554  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:39.866622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:39.898083  451238 cri.go:89] found id: ""
	I0805 13:01:39.898108  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.898115  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:39.898128  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:39.898147  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:39.912192  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:39.912221  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:39.989216  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.989246  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:39.989262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:40.069702  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:40.069746  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:40.118390  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:40.118428  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:42.669421  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:42.682287  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:42.682359  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:42.722933  451238 cri.go:89] found id: ""
	I0805 13:01:42.722961  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.722969  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:42.722975  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:42.723037  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:42.757604  451238 cri.go:89] found id: ""
	I0805 13:01:42.757635  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.757646  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:42.757654  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:42.757723  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:42.795825  451238 cri.go:89] found id: ""
	I0805 13:01:42.795852  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.795863  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:42.795871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:42.795939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:42.831749  451238 cri.go:89] found id: ""
	I0805 13:01:42.831779  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.831791  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:42.831800  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:42.831862  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:42.866280  451238 cri.go:89] found id: ""
	I0805 13:01:42.866310  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.866322  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:42.866330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:42.866390  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:42.904393  451238 cri.go:89] found id: ""
	I0805 13:01:42.904427  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.904436  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:42.904445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:42.904510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:42.943175  451238 cri.go:89] found id: ""
	I0805 13:01:42.943204  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.943215  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:42.943223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:42.943292  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:42.979117  451238 cri.go:89] found id: ""
	I0805 13:01:42.979144  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.979152  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:42.979174  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:42.979191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:43.032032  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:43.032070  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:43.046285  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:43.046315  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:43.120300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:43.120327  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:43.120347  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:43.209800  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:43.209851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:45.759057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:45.771984  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:45.772056  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:45.805421  451238 cri.go:89] found id: ""
	I0805 13:01:45.805451  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.805459  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:45.805466  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:45.805521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:45.841552  451238 cri.go:89] found id: ""
	I0805 13:01:45.841579  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.841588  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:45.841597  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:45.841672  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:45.878502  451238 cri.go:89] found id: ""
	I0805 13:01:45.878529  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.878537  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:45.878546  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:45.878622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:45.921145  451238 cri.go:89] found id: ""
	I0805 13:01:45.921187  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.921198  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:45.921207  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:45.921273  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:45.958408  451238 cri.go:89] found id: ""
	I0805 13:01:45.958437  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.958445  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:45.958452  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:45.958521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:45.994632  451238 cri.go:89] found id: ""
	I0805 13:01:45.994660  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.994669  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:45.994676  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:45.994727  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:46.032930  451238 cri.go:89] found id: ""
	I0805 13:01:46.032961  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.032971  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:46.032978  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:46.033041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:46.074396  451238 cri.go:89] found id: ""
	I0805 13:01:46.074429  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.074441  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:46.074454  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:46.074475  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:46.131977  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:46.132020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:46.147924  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:46.147957  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:46.222005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:46.222038  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:46.222054  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:46.306799  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:46.306842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:48.856982  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:48.870945  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:48.871025  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:48.930811  451238 cri.go:89] found id: ""
	I0805 13:01:48.930837  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.930852  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:48.930858  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:48.930917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:48.986604  451238 cri.go:89] found id: ""
	I0805 13:01:48.986629  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.986637  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:48.986643  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:48.986706  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:49.039433  451238 cri.go:89] found id: ""
	I0805 13:01:49.039468  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.039479  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:49.039487  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:49.039555  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:49.079593  451238 cri.go:89] found id: ""
	I0805 13:01:49.079625  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.079637  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:49.079645  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:49.079714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:49.116243  451238 cri.go:89] found id: ""
	I0805 13:01:49.116274  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.116284  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:49.116292  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:49.116360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:49.158744  451238 cri.go:89] found id: ""
	I0805 13:01:49.158779  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.158790  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:49.158799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:49.158868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:49.193747  451238 cri.go:89] found id: ""
	I0805 13:01:49.193778  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.193786  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:49.193792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:49.193843  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:49.227663  451238 cri.go:89] found id: ""
	I0805 13:01:49.227691  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.227704  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:49.227714  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:49.227727  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:49.281380  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:49.281424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:49.296286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:49.296318  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:49.368584  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:49.368609  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:49.368625  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:49.453857  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:49.453909  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:51.993057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:52.006066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:52.006148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:52.043179  451238 cri.go:89] found id: ""
	I0805 13:01:52.043212  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.043223  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:52.043231  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:52.043300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:52.076469  451238 cri.go:89] found id: ""
	I0805 13:01:52.076502  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.076512  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:52.076520  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:52.076586  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:52.112443  451238 cri.go:89] found id: ""
	I0805 13:01:52.112477  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.112488  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:52.112497  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:52.112569  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:52.147589  451238 cri.go:89] found id: ""
	I0805 13:01:52.147620  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.147631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:52.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:52.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:52.184016  451238 cri.go:89] found id: ""
	I0805 13:01:52.184053  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.184063  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:52.184072  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:52.184134  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:52.219670  451238 cri.go:89] found id: ""
	I0805 13:01:52.219702  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.219714  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:52.219727  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:52.219820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:52.258697  451238 cri.go:89] found id: ""
	I0805 13:01:52.258731  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.258744  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:52.258752  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:52.258818  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:52.299599  451238 cri.go:89] found id: ""
	I0805 13:01:52.299636  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.299649  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:52.299665  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:52.299683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:52.351730  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:52.351772  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:52.365993  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:52.366022  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:52.436019  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:52.436041  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:52.436056  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:52.520082  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:52.520118  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:55.064214  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:55.077358  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:55.077454  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:55.110523  451238 cri.go:89] found id: ""
	I0805 13:01:55.110555  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.110564  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:55.110570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:55.110630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:55.147870  451238 cri.go:89] found id: ""
	I0805 13:01:55.147905  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.147916  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:55.147925  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:55.147998  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:55.180769  451238 cri.go:89] found id: ""
	I0805 13:01:55.180803  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.180814  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:55.180822  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:55.180890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:55.217290  451238 cri.go:89] found id: ""
	I0805 13:01:55.217332  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.217343  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:55.217353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:55.217420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:55.254185  451238 cri.go:89] found id: ""
	I0805 13:01:55.254221  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.254232  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:55.254239  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:55.254295  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:55.290633  451238 cri.go:89] found id: ""
	I0805 13:01:55.290662  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.290673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:55.290681  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:55.290747  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:55.325830  451238 cri.go:89] found id: ""
	I0805 13:01:55.325862  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.325873  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:55.325880  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:55.325947  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:55.359887  451238 cri.go:89] found id: ""
	I0805 13:01:55.359922  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.359931  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:55.359941  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:55.359953  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:55.418251  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:55.418299  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:55.432007  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:55.432038  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:55.507177  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:55.507205  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:55.507219  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:55.586919  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:55.586965  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.128822  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:58.142726  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:58.142799  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:58.178027  451238 cri.go:89] found id: ""
	I0805 13:01:58.178056  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.178067  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:58.178075  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:58.178147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:58.213309  451238 cri.go:89] found id: ""
	I0805 13:01:58.213340  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.213351  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:58.213358  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:58.213430  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:58.247296  451238 cri.go:89] found id: ""
	I0805 13:01:58.247323  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.247332  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:58.247338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:58.247393  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:58.280226  451238 cri.go:89] found id: ""
	I0805 13:01:58.280255  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.280266  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:58.280277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:58.280335  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:58.316934  451238 cri.go:89] found id: ""
	I0805 13:01:58.316969  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.316981  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:58.316989  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:58.317055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:58.360931  451238 cri.go:89] found id: ""
	I0805 13:01:58.360967  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.360979  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:58.360987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:58.361055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:58.399112  451238 cri.go:89] found id: ""
	I0805 13:01:58.399150  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.399163  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:58.399171  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:58.399244  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:58.441903  451238 cri.go:89] found id: ""
	I0805 13:01:58.441930  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.441941  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:58.441952  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:58.441967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:58.524869  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:58.524908  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.562598  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:58.562634  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:58.618274  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:58.618313  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:58.633011  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:58.633039  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:58.706287  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
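	Every describe-nodes attempt in this log fails the same way: the connection to localhost:8443 is refused, which is consistent with no kube-apiserver container ever being found. A hedged way to confirm that manually on the node is sketched below; the journalctl and crictl commands mirror the ones the test already runs, while the ss port check is an added assumption rather than something taken from this log.

	    # Hedged manual check: is anything serving the apiserver port, and what
	    # do the runtime logs say? (the ss check is an assumption; the rest
	    # mirrors the commands in the log above)
	    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u kubelet -n 400 | tail -n 50
	    sudo journalctl -u crio -n 400 | tail -n 50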
	I0805 13:02:01.206971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:01.222277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:01.222357  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:01.266949  451238 cri.go:89] found id: ""
	I0805 13:02:01.266982  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.266993  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:01.267007  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:01.267108  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:01.306765  451238 cri.go:89] found id: ""
	I0805 13:02:01.306791  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.306799  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:01.306805  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:01.306859  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:01.345108  451238 cri.go:89] found id: ""
	I0805 13:02:01.345145  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.345157  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:01.345164  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:01.345227  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:01.383201  451238 cri.go:89] found id: ""
	I0805 13:02:01.383231  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.383239  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:01.383245  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:01.383307  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:01.419292  451238 cri.go:89] found id: ""
	I0805 13:02:01.419320  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.419331  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:01.419338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:01.419410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:01.456447  451238 cri.go:89] found id: ""
	I0805 13:02:01.456482  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.456492  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:01.456500  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:01.456568  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:01.496266  451238 cri.go:89] found id: ""
	I0805 13:02:01.496298  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.496306  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:01.496312  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:01.496375  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:01.541492  451238 cri.go:89] found id: ""
	I0805 13:02:01.541529  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.541541  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:01.541555  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:01.541571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:01.593140  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:01.593185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:01.606641  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:01.606670  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:01.681989  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.682015  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:01.682030  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:01.765612  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:01.765655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.311066  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:04.326530  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:04.326599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:04.360091  451238 cri.go:89] found id: ""
	I0805 13:02:04.360124  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.360136  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:04.360142  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:04.360214  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:04.398983  451238 cri.go:89] found id: ""
	I0805 13:02:04.399014  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.399026  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:04.399045  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:04.399122  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:04.433444  451238 cri.go:89] found id: ""
	I0805 13:02:04.433474  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.433483  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:04.433495  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:04.433546  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:04.470113  451238 cri.go:89] found id: ""
	I0805 13:02:04.470145  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.470156  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:04.470167  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:04.470233  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:04.505695  451238 cri.go:89] found id: ""
	I0805 13:02:04.505721  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.505731  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:04.505738  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:04.505801  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:04.544093  451238 cri.go:89] found id: ""
	I0805 13:02:04.544121  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.544129  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:04.544136  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:04.544196  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:04.579663  451238 cri.go:89] found id: ""
	I0805 13:02:04.579702  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.579715  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:04.579724  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:04.579803  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:04.616524  451238 cri.go:89] found id: ""
	I0805 13:02:04.616565  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.616577  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:04.616590  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:04.616607  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:04.693014  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:04.693035  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:04.693048  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:04.772508  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:04.772550  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.813014  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:04.813043  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:04.864653  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:04.864702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.378816  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:07.392347  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:07.392439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:07.425843  451238 cri.go:89] found id: ""
	I0805 13:02:07.425876  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.425887  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:07.425895  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:07.425958  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:07.461547  451238 cri.go:89] found id: ""
	I0805 13:02:07.461575  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.461584  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:07.461591  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:07.461651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:07.496461  451238 cri.go:89] found id: ""
	I0805 13:02:07.496500  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.496510  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:07.496521  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:07.496599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:07.531520  451238 cri.go:89] found id: ""
	I0805 13:02:07.531556  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.531566  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:07.531574  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:07.531642  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:07.571821  451238 cri.go:89] found id: ""
	I0805 13:02:07.571855  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.571866  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:07.571876  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:07.571948  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:07.611111  451238 cri.go:89] found id: ""
	I0805 13:02:07.611151  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.611159  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:07.611165  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:07.611226  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:07.651428  451238 cri.go:89] found id: ""
	I0805 13:02:07.651456  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.651464  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:07.651470  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:07.651520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:07.689828  451238 cri.go:89] found id: ""
	I0805 13:02:07.689858  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.689866  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:07.689877  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:07.689893  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:07.746381  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:07.746422  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.760953  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:07.760989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:07.834859  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:07.834883  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:07.834901  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:07.915344  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:07.915376  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:10.459232  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:10.472789  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:10.472853  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:10.508434  451238 cri.go:89] found id: ""
	I0805 13:02:10.508462  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.508470  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:10.508477  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:10.508539  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:10.543487  451238 cri.go:89] found id: ""
	I0805 13:02:10.543515  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.543524  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:10.543530  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:10.543582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:10.588274  451238 cri.go:89] found id: ""
	I0805 13:02:10.588302  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.588310  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:10.588317  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:10.588379  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:10.620810  451238 cri.go:89] found id: ""
	I0805 13:02:10.620851  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.620863  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:10.620871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:10.620945  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:10.657882  451238 cri.go:89] found id: ""
	I0805 13:02:10.657913  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.657923  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:10.657929  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:10.657993  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:10.696188  451238 cri.go:89] found id: ""
	I0805 13:02:10.696220  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.696229  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:10.696235  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:10.696294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:10.729942  451238 cri.go:89] found id: ""
	I0805 13:02:10.729977  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.729988  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:10.729996  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:10.730050  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:10.761972  451238 cri.go:89] found id: ""
	I0805 13:02:10.762000  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.762008  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:10.762018  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:10.762032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:10.816859  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:10.816890  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:10.830348  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:10.830379  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:10.902720  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:10.902753  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:10.902771  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:10.981464  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:10.981505  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:13.528296  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:13.541813  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:13.541887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:13.575632  451238 cri.go:89] found id: ""
	I0805 13:02:13.575669  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.575681  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:13.575689  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:13.575766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:13.612646  451238 cri.go:89] found id: ""
	I0805 13:02:13.612680  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.612691  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:13.612699  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:13.612755  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:13.650310  451238 cri.go:89] found id: ""
	I0805 13:02:13.650341  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.650361  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:13.650369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:13.650439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:13.686941  451238 cri.go:89] found id: ""
	I0805 13:02:13.686970  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.686981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:13.686990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:13.687054  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:13.722250  451238 cri.go:89] found id: ""
	I0805 13:02:13.722285  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.722297  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:13.722306  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:13.722388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:13.758337  451238 cri.go:89] found id: ""
	I0805 13:02:13.758367  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.758375  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:13.758382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:13.758443  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:13.792980  451238 cri.go:89] found id: ""
	I0805 13:02:13.793016  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.793028  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:13.793036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:13.793127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:13.831511  451238 cri.go:89] found id: ""
	I0805 13:02:13.831539  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.831547  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:13.831558  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:13.831579  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.885124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:13.885169  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:13.899112  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:13.899155  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:13.977058  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:13.977099  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:13.977115  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:14.060873  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:14.060911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:16.602595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:16.617557  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:16.617638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:16.660212  451238 cri.go:89] found id: ""
	I0805 13:02:16.660244  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.660256  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:16.660264  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:16.660323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:16.695515  451238 cri.go:89] found id: ""
	I0805 13:02:16.695553  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.695564  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:16.695572  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:16.695638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:16.732844  451238 cri.go:89] found id: ""
	I0805 13:02:16.732875  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.732884  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:16.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:16.732943  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:16.772465  451238 cri.go:89] found id: ""
	I0805 13:02:16.772497  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.772504  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:16.772517  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:16.772582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:16.809826  451238 cri.go:89] found id: ""
	I0805 13:02:16.809863  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.809875  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:16.809882  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:16.809949  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:16.849480  451238 cri.go:89] found id: ""
	I0805 13:02:16.849512  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.849523  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:16.849531  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:16.849598  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:16.884098  451238 cri.go:89] found id: ""
	I0805 13:02:16.884132  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.884144  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:16.884152  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:16.884222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:16.920497  451238 cri.go:89] found id: ""
	I0805 13:02:16.920523  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.920530  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:16.920541  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:16.920556  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:16.975287  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:16.975317  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:16.989524  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:16.989552  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:17.057997  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:17.058022  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:17.058037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:17.133721  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:17.133763  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:19.672385  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:19.687948  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:19.688017  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:19.724105  451238 cri.go:89] found id: ""
	I0805 13:02:19.724132  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.724140  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:19.724147  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:19.724199  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:19.758263  451238 cri.go:89] found id: ""
	I0805 13:02:19.758296  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.758306  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:19.758314  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:19.758381  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:19.792924  451238 cri.go:89] found id: ""
	I0805 13:02:19.792954  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.792961  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:19.792967  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:19.793023  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:19.826340  451238 cri.go:89] found id: ""
	I0805 13:02:19.826367  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.826375  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:19.826382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:19.826434  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:19.864289  451238 cri.go:89] found id: ""
	I0805 13:02:19.864323  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.864334  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:19.864343  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:19.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:19.899630  451238 cri.go:89] found id: ""
	I0805 13:02:19.899661  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.899673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:19.899682  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:19.899786  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:19.935798  451238 cri.go:89] found id: ""
	I0805 13:02:19.935826  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.935836  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:19.935843  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:19.935896  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:19.977984  451238 cri.go:89] found id: ""
	I0805 13:02:19.978019  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.978031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:19.978044  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:19.978062  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:20.030096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:20.030131  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:20.043878  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:20.043940  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:20.119251  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:20.119279  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:20.119297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:20.202445  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:20.202488  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:22.744728  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:22.758606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:22.758675  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:22.791663  451238 cri.go:89] found id: ""
	I0805 13:02:22.791696  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.791708  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:22.791717  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:22.791821  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:22.826568  451238 cri.go:89] found id: ""
	I0805 13:02:22.826594  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.826603  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:22.826609  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:22.826671  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:22.860430  451238 cri.go:89] found id: ""
	I0805 13:02:22.860459  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.860470  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:22.860479  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:22.860543  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:22.893815  451238 cri.go:89] found id: ""
	I0805 13:02:22.893846  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.893854  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:22.893860  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:22.893929  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:22.929804  451238 cri.go:89] found id: ""
	I0805 13:02:22.929830  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.929840  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:22.929849  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:22.929915  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:22.964918  451238 cri.go:89] found id: ""
	I0805 13:02:22.964950  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.964961  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:22.964969  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:22.965035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:23.000236  451238 cri.go:89] found id: ""
	I0805 13:02:23.000271  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.000282  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:23.000290  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:23.000354  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:23.052075  451238 cri.go:89] found id: ""
	I0805 13:02:23.052108  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.052117  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:23.052128  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:23.052141  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:23.104213  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:23.104248  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:23.118811  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:23.118851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:23.188552  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:23.188578  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:23.188595  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:23.272518  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:23.272562  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:25.811116  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:25.825030  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:25.825113  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:25.864282  451238 cri.go:89] found id: ""
	I0805 13:02:25.864318  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.864331  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:25.864339  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:25.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:25.901712  451238 cri.go:89] found id: ""
	I0805 13:02:25.901746  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.901754  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:25.901760  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:25.901822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:25.937036  451238 cri.go:89] found id: ""
	I0805 13:02:25.937068  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.937077  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:25.937083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:25.937146  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:25.974598  451238 cri.go:89] found id: ""
	I0805 13:02:25.974627  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.974638  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:25.974646  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:25.974713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:26.011083  451238 cri.go:89] found id: ""
	I0805 13:02:26.011116  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.011124  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:26.011130  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:26.011190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:26.050187  451238 cri.go:89] found id: ""
	I0805 13:02:26.050219  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.050231  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:26.050242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:26.050317  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:26.085038  451238 cri.go:89] found id: ""
	I0805 13:02:26.085067  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.085077  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:26.085086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:26.085151  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:26.122121  451238 cri.go:89] found id: ""
	I0805 13:02:26.122150  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.122158  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:26.122173  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:26.122191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:26.193819  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:26.193850  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:26.193865  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:26.273453  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:26.273492  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:26.312474  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:26.312509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:26.363176  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:26.363215  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:28.878523  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:28.892242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:28.892330  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:28.928650  451238 cri.go:89] found id: ""
	I0805 13:02:28.928682  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.928693  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:28.928702  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:28.928772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:28.965582  451238 cri.go:89] found id: ""
	I0805 13:02:28.965615  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.965626  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:28.965634  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:28.965698  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:29.001824  451238 cri.go:89] found id: ""
	I0805 13:02:29.001855  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.001865  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:29.001874  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:29.001939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:29.037688  451238 cri.go:89] found id: ""
	I0805 13:02:29.037715  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.037722  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:29.037730  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:29.037780  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:29.078495  451238 cri.go:89] found id: ""
	I0805 13:02:29.078540  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.078552  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:29.078559  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:29.078627  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:29.113728  451238 cri.go:89] found id: ""
	I0805 13:02:29.113764  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.113776  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:29.113786  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:29.113851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:29.147590  451238 cri.go:89] found id: ""
	I0805 13:02:29.147618  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.147629  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:29.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:29.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:29.186015  451238 cri.go:89] found id: ""
	I0805 13:02:29.186043  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.186052  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:29.186062  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:29.186074  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:29.242795  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:29.242850  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:29.257012  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:29.257046  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:29.330528  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:29.330555  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:29.330569  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:29.418109  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:29.418145  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:31.986351  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:32.001265  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:32.001349  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:32.035152  451238 cri.go:89] found id: ""
	I0805 13:02:32.035191  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.035200  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:32.035208  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:32.035262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:32.069086  451238 cri.go:89] found id: ""
	I0805 13:02:32.069118  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.069128  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:32.069136  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:32.069204  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:32.103788  451238 cri.go:89] found id: ""
	I0805 13:02:32.103814  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.103822  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:32.103831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:32.103893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:32.139104  451238 cri.go:89] found id: ""
	I0805 13:02:32.139138  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.139149  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:32.139157  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:32.139222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:32.192759  451238 cri.go:89] found id: ""
	I0805 13:02:32.192789  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.192798  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:32.192804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:32.192865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:32.231080  451238 cri.go:89] found id: ""
	I0805 13:02:32.231115  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.231126  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:32.231135  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:32.231200  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:32.266547  451238 cri.go:89] found id: ""
	I0805 13:02:32.266578  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.266587  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:32.266594  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:32.266647  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:32.301828  451238 cri.go:89] found id: ""
	I0805 13:02:32.301856  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.301865  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:32.301875  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:32.301888  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:32.358439  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:32.358479  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:32.372349  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:32.372383  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:32.442335  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:32.442369  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:32.442388  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:32.521705  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:32.521744  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:35.060867  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:35.074370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:35.074433  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:35.111149  451238 cri.go:89] found id: ""
	I0805 13:02:35.111181  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.111191  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:35.111200  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:35.111268  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:35.153781  451238 cri.go:89] found id: ""
	I0805 13:02:35.153814  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.153825  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:35.153832  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:35.153894  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:35.193207  451238 cri.go:89] found id: ""
	I0805 13:02:35.193239  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.193256  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:35.193291  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:35.193370  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:35.243879  451238 cri.go:89] found id: ""
	I0805 13:02:35.243915  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.243928  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:35.243936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:35.243994  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:35.297922  451238 cri.go:89] found id: ""
	I0805 13:02:35.297954  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.297966  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:35.297973  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:35.298039  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:35.333201  451238 cri.go:89] found id: ""
	I0805 13:02:35.333234  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.333245  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:35.333254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:35.333316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:35.366327  451238 cri.go:89] found id: ""
	I0805 13:02:35.366361  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.366373  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:35.366381  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:35.366449  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:35.401515  451238 cri.go:89] found id: ""
	I0805 13:02:35.401546  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.401555  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:35.401565  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:35.401578  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:35.451057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:35.451090  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:35.465054  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:35.465095  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:35.547111  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:35.547142  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:35.547160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:35.627451  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:35.627490  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:38.169022  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:38.181892  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:38.181968  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:38.217919  451238 cri.go:89] found id: ""
	I0805 13:02:38.217951  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.217961  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:38.217970  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:38.218041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:38.253967  451238 cri.go:89] found id: ""
	I0805 13:02:38.253999  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.254008  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:38.254020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:38.254073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:38.293757  451238 cri.go:89] found id: ""
	I0805 13:02:38.293789  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.293801  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:38.293809  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:38.293904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:38.329657  451238 cri.go:89] found id: ""
	I0805 13:02:38.329686  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.329697  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:38.329705  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:38.329772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:38.364602  451238 cri.go:89] found id: ""
	I0805 13:02:38.364635  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.364647  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:38.364656  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:38.364732  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:38.396352  451238 cri.go:89] found id: ""
	I0805 13:02:38.396382  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.396394  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:38.396403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:38.396471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:38.429172  451238 cri.go:89] found id: ""
	I0805 13:02:38.429203  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.429214  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:38.429223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:38.429293  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:38.464855  451238 cri.go:89] found id: ""
	I0805 13:02:38.464891  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.464903  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:38.464916  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:38.464931  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:38.514924  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:38.514967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:38.530076  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:38.530113  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:38.602472  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:38.602494  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:38.602509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:38.683905  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:38.683948  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:41.226878  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:41.245027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:41.245100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:41.280482  451238 cri.go:89] found id: ""
	I0805 13:02:41.280511  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.280523  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:41.280532  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:41.280597  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:41.316592  451238 cri.go:89] found id: ""
	I0805 13:02:41.316622  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.316633  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:41.316641  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:41.316708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:41.353282  451238 cri.go:89] found id: ""
	I0805 13:02:41.353313  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.353324  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:41.353333  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:41.353397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:41.393379  451238 cri.go:89] found id: ""
	I0805 13:02:41.393406  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.393417  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:41.393426  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:41.393502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:41.430980  451238 cri.go:89] found id: ""
	I0805 13:02:41.431012  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.431023  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:41.431031  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:41.431106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:41.467228  451238 cri.go:89] found id: ""
	I0805 13:02:41.467261  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.467273  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:41.467281  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:41.467348  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:41.502105  451238 cri.go:89] found id: ""
	I0805 13:02:41.502153  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.502166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:41.502175  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:41.502250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:41.539286  451238 cri.go:89] found id: ""
	I0805 13:02:41.539314  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.539325  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:41.539338  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:41.539353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:41.592135  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:41.592175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:41.608151  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:41.608184  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:41.680096  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:41.680131  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:41.680148  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:41.759589  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:41.759628  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
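The block above is one full probe cycle: minikube looks for a running kube-apiserver process, asks crictl for each control-plane component in turn, and, finding nothing, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before trying again. The cycles that follow repeat the same sequence roughly every three seconds. The Go sketch below is illustrative only, not the minikube source: it runs the commands locally via os/exec rather than over ssh_runner, but approximates the probe-and-retry loop the log records.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Control-plane components probed in the log above, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainers mirrors `sudo crictl ps -a --quiet --name=<component>`:
// it returns any container IDs printed (one per line), or an empty slice.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for {
		found := 0
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			found += len(ids)
		}
		if found > 0 {
			return // control-plane containers exist; stop probing
		}
		// Nothing is running yet: gather diagnostics, then retry after a
		// pause, matching the ~3s cadence between cycles in the log above.
		exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Run()
		exec.Command("/bin/bash", "-c", "sudo journalctl -u crio -n 400").Run()
		time.Sleep(3 * time.Second)
	}
}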
	I0805 13:02:44.300461  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:44.314310  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:44.314388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:44.348516  451238 cri.go:89] found id: ""
	I0805 13:02:44.348549  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.348562  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:44.348570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:44.348635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:44.388256  451238 cri.go:89] found id: ""
	I0805 13:02:44.388289  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.388299  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:44.388309  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:44.388383  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:44.426743  451238 cri.go:89] found id: ""
	I0805 13:02:44.426778  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.426786  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:44.426792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:44.426848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:44.463008  451238 cri.go:89] found id: ""
	I0805 13:02:44.463044  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.463054  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:44.463062  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:44.463129  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:44.497662  451238 cri.go:89] found id: ""
	I0805 13:02:44.497696  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.497707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:44.497715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:44.497789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:44.534253  451238 cri.go:89] found id: ""
	I0805 13:02:44.534281  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.534288  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:44.534294  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:44.534378  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:44.574350  451238 cri.go:89] found id: ""
	I0805 13:02:44.574380  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.574390  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:44.574398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:44.574468  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:44.609984  451238 cri.go:89] found id: ""
	I0805 13:02:44.610018  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.610031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:44.610044  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:44.610060  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:44.650363  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:44.650402  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:44.700997  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:44.701032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:44.716841  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:44.716874  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:44.785482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:44.785502  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:44.785517  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.365382  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:47.378779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:47.378851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:47.413615  451238 cri.go:89] found id: ""
	I0805 13:02:47.413636  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.413645  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:47.413651  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:47.413699  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:47.448536  451238 cri.go:89] found id: ""
	I0805 13:02:47.448563  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.448572  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:47.448578  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:47.448629  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:47.490817  451238 cri.go:89] found id: ""
	I0805 13:02:47.490847  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.490856  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:47.490862  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:47.490931  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:47.533151  451238 cri.go:89] found id: ""
	I0805 13:02:47.533179  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.533187  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:47.533193  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:47.533250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:47.571991  451238 cri.go:89] found id: ""
	I0805 13:02:47.572022  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.572030  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:47.572036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:47.572096  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:47.606943  451238 cri.go:89] found id: ""
	I0805 13:02:47.606976  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.606987  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:47.606995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:47.607073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:47.644704  451238 cri.go:89] found id: ""
	I0805 13:02:47.644741  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.644753  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:47.644762  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:47.644828  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:47.687361  451238 cri.go:89] found id: ""
	I0805 13:02:47.687395  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.687408  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:47.687427  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:47.687453  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.766572  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:47.766614  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:47.812209  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:47.812242  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:47.862948  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:47.862987  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:47.878697  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:47.878729  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:47.951680  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.452861  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:50.466370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:50.466440  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:50.500001  451238 cri.go:89] found id: ""
	I0805 13:02:50.500031  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.500043  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:50.500051  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:50.500126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:50.541752  451238 cri.go:89] found id: ""
	I0805 13:02:50.541786  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.541794  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:50.541800  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:50.541864  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:50.578889  451238 cri.go:89] found id: ""
	I0805 13:02:50.578915  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.578923  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:50.578930  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:50.578984  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:50.614865  451238 cri.go:89] found id: ""
	I0805 13:02:50.614896  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.614906  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:50.614912  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:50.614980  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:50.656169  451238 cri.go:89] found id: ""
	I0805 13:02:50.656195  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.656202  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:50.656209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:50.656277  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:50.695050  451238 cri.go:89] found id: ""
	I0805 13:02:50.695082  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.695099  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:50.695108  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:50.695187  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:50.733205  451238 cri.go:89] found id: ""
	I0805 13:02:50.733233  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.733242  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:50.733249  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:50.733300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:50.770654  451238 cri.go:89] found id: ""
	I0805 13:02:50.770683  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.770693  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:50.770706  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:50.770721  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:50.826521  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:50.826567  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:50.842153  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:50.842181  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:50.916445  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.916474  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:50.916487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:50.999973  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:51.000020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.539541  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:53.553804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:53.553893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:53.593075  451238 cri.go:89] found id: ""
	I0805 13:02:53.593105  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.593114  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:53.593121  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:53.593190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:53.629967  451238 cri.go:89] found id: ""
	I0805 13:02:53.630001  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.630012  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:53.630020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:53.630088  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:53.663535  451238 cri.go:89] found id: ""
	I0805 13:02:53.663564  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.663572  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:53.663577  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:53.663635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:53.697650  451238 cri.go:89] found id: ""
	I0805 13:02:53.697676  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.697684  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:53.697690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:53.697741  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:53.732845  451238 cri.go:89] found id: ""
	I0805 13:02:53.732873  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.732883  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:53.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:53.732950  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:53.774673  451238 cri.go:89] found id: ""
	I0805 13:02:53.774703  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.774712  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:53.774719  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:53.774783  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:53.815368  451238 cri.go:89] found id: ""
	I0805 13:02:53.815401  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.815413  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:53.815423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:53.815487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:53.849726  451238 cri.go:89] found id: ""
	I0805 13:02:53.849760  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.849771  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:53.849785  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:53.849801  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:53.925356  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:53.925398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.966721  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:53.966751  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:54.023096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:54.023140  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:54.037634  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:54.037666  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:54.115159  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:56.616326  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:56.629665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:56.629744  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:56.665665  451238 cri.go:89] found id: ""
	I0805 13:02:56.665701  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.665713  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:56.665722  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:56.665790  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:56.700446  451238 cri.go:89] found id: ""
	I0805 13:02:56.700473  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.700481  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:56.700488  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:56.700554  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:56.737152  451238 cri.go:89] found id: ""
	I0805 13:02:56.737190  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.737202  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:56.737210  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:56.737283  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:56.777909  451238 cri.go:89] found id: ""
	I0805 13:02:56.777942  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.777954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:56.777961  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:56.778027  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:56.813503  451238 cri.go:89] found id: ""
	I0805 13:02:56.813537  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.813547  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:56.813556  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:56.813625  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:56.848964  451238 cri.go:89] found id: ""
	I0805 13:02:56.848993  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.849002  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:56.849008  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:56.849071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:56.884310  451238 cri.go:89] found id: ""
	I0805 13:02:56.884339  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.884347  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:56.884356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:56.884417  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:56.925895  451238 cri.go:89] found id: ""
	I0805 13:02:56.925926  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.925936  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:56.925948  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:56.925962  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:56.982847  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:56.982882  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:56.997703  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:56.997742  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:57.071130  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:57.071153  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:57.071174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:57.152985  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:57.153029  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.697501  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:59.711799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:59.711879  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:59.746992  451238 cri.go:89] found id: ""
	I0805 13:02:59.747024  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.747035  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:59.747043  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:59.747115  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:59.780563  451238 cri.go:89] found id: ""
	I0805 13:02:59.780592  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.780604  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:59.780611  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:59.780676  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:59.816973  451238 cri.go:89] found id: ""
	I0805 13:02:59.817007  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.817019  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:59.817027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:59.817098  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:59.851989  451238 cri.go:89] found id: ""
	I0805 13:02:59.852018  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.852028  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:59.852035  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:59.852086  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:59.887491  451238 cri.go:89] found id: ""
	I0805 13:02:59.887517  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.887525  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:59.887535  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:59.887587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:59.924965  451238 cri.go:89] found id: ""
	I0805 13:02:59.924997  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.925005  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:59.925012  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:59.925062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:59.965830  451238 cri.go:89] found id: ""
	I0805 13:02:59.965860  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.965868  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:59.965875  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:59.965932  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:00.003208  451238 cri.go:89] found id: ""
	I0805 13:03:00.003241  451238 logs.go:276] 0 containers: []
	W0805 13:03:00.003250  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:00.003260  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:00.003275  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:00.056865  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:00.056911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:00.070563  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:00.070593  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:00.137931  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:00.137957  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:00.137976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:00.221598  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:00.221649  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:02.761328  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:02.775836  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:02.775904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:02.812714  451238 cri.go:89] found id: ""
	I0805 13:03:02.812752  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.812764  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:02.812773  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:02.812848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:02.850072  451238 cri.go:89] found id: ""
	I0805 13:03:02.850103  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.850130  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:02.850138  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:02.850197  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:02.886956  451238 cri.go:89] found id: ""
	I0805 13:03:02.887081  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.887103  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:02.887114  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:02.887188  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:02.924874  451238 cri.go:89] found id: ""
	I0805 13:03:02.924906  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.924918  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:02.924925  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:02.924996  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:02.965965  451238 cri.go:89] found id: ""
	I0805 13:03:02.965996  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.966007  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:02.966015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:02.966101  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:03.001081  451238 cri.go:89] found id: ""
	I0805 13:03:03.001118  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.001130  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:03.001140  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:03.001201  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:03.036194  451238 cri.go:89] found id: ""
	I0805 13:03:03.036223  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.036234  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:03.036243  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:03.036303  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:03.071905  451238 cri.go:89] found id: ""
	I0805 13:03:03.071940  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.071951  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:03.071964  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:03.071982  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:03.124400  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:03.124442  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:03.138492  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:03.138520  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:03.207300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:03.207326  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:03.207342  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:03.294941  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:03.294983  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:05.836187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:05.850504  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:05.850609  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:05.889692  451238 cri.go:89] found id: ""
	I0805 13:03:05.889718  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.889729  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:05.889737  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:05.889804  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:05.924597  451238 cri.go:89] found id: ""
	I0805 13:03:05.924630  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.924640  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:05.924647  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:05.924711  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:05.960373  451238 cri.go:89] found id: ""
	I0805 13:03:05.960404  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.960413  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:05.960419  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:05.960471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:05.996583  451238 cri.go:89] found id: ""
	I0805 13:03:05.996617  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.996628  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:05.996636  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:05.996708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:06.033539  451238 cri.go:89] found id: ""
	I0805 13:03:06.033567  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.033575  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:06.033586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:06.033655  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:06.069348  451238 cri.go:89] found id: ""
	I0805 13:03:06.069378  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.069391  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:06.069401  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:06.069466  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:06.103570  451238 cri.go:89] found id: ""
	I0805 13:03:06.103599  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.103607  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:06.103613  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:06.103665  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:06.140230  451238 cri.go:89] found id: ""
	I0805 13:03:06.140260  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.140271  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:06.140284  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:06.140300  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:06.191073  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:06.191123  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:06.204825  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:06.204857  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:06.281309  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:06.281339  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:06.281358  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:06.361709  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:06.361749  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:08.903194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:08.921602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:08.921681  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:08.960916  451238 cri.go:89] found id: ""
	I0805 13:03:08.960945  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.960975  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:08.960986  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:08.961055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:08.996316  451238 cri.go:89] found id: ""
	I0805 13:03:08.996417  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.996436  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:08.996448  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:08.996522  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:09.038536  451238 cri.go:89] found id: ""
	I0805 13:03:09.038572  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.038584  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:09.038593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:09.038664  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:09.075368  451238 cri.go:89] found id: ""
	I0805 13:03:09.075396  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.075405  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:09.075412  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:09.075474  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:09.114232  451238 cri.go:89] found id: ""
	I0805 13:03:09.114262  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.114272  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:09.114280  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:09.114353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:09.161878  451238 cri.go:89] found id: ""
	I0805 13:03:09.161964  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.161978  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:09.161988  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:09.162062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:09.206694  451238 cri.go:89] found id: ""
	I0805 13:03:09.206727  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.206739  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:09.206748  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:09.206890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:09.257029  451238 cri.go:89] found id: ""
	I0805 13:03:09.257066  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.257079  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:09.257090  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:09.257107  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:09.278638  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:09.278679  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:09.353760  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:09.353781  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:09.353793  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:09.438371  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:09.438419  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:09.487253  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:09.487297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:12.042215  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:12.055721  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:12.055812  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:12.096936  451238 cri.go:89] found id: ""
	I0805 13:03:12.096965  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.096977  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:12.096985  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:12.097051  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:12.136149  451238 cri.go:89] found id: ""
	I0805 13:03:12.136181  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.136192  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:12.136199  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:12.136276  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:12.180568  451238 cri.go:89] found id: ""
	I0805 13:03:12.180606  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.180618  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:12.180626  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:12.180695  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:12.221759  451238 cri.go:89] found id: ""
	I0805 13:03:12.221794  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.221806  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:12.221815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:12.221882  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:12.259460  451238 cri.go:89] found id: ""
	I0805 13:03:12.259490  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.259498  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:12.259508  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:12.259563  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:12.301245  451238 cri.go:89] found id: ""
	I0805 13:03:12.301277  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.301289  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:12.301297  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:12.301368  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:12.343640  451238 cri.go:89] found id: ""
	I0805 13:03:12.343678  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.343690  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:12.343698  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:12.343809  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:12.382729  451238 cri.go:89] found id: ""
	I0805 13:03:12.382762  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.382774  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:12.382787  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:12.382807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:12.400862  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:12.400897  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:12.478755  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:12.478788  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:12.478807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:12.566029  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:12.566080  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:12.611834  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:12.611929  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:15.171517  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:15.185569  451238 kubeadm.go:597] duration metric: took 4m3.737627997s to restartPrimaryControlPlane
	W0805 13:03:15.185662  451238 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:15.185697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:15.669994  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:15.684794  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:15.695088  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:15.705403  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:15.705427  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:15.705488  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:15.714777  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:15.714833  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:15.724437  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:15.733263  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:15.733317  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:15.743004  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.752219  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:15.752278  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.761788  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:15.771193  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:15.771245  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
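Before re-initialising, minikube checks each expected kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that does not contain it; here all four grep calls exit with status 2 because the files are absent, so the rm calls are effectively no-ops. The Go sketch below is a hedged illustration of that cleanup step, with the endpoint and file list taken from the log above; it is not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file
		// does not exist (status 2 in the log above); either way the
		// file is treated as stale and deleted so kubeadm can rewrite it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}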
	I0805 13:03:15.780964  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:15.855628  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:03:15.855751  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:16.015686  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:16.015880  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:16.016041  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:16.207054  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:16.209133  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:16.209256  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:16.209376  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:16.209493  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:16.209597  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:16.209703  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:16.211637  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:16.211726  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:16.211833  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:16.211959  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:16.212690  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:16.212863  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:16.212963  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:16.283080  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:16.609523  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:16.765635  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:16.934487  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:16.955335  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:16.956267  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:16.956328  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:17.088081  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:17.090118  451238 out.go:204]   - Booting up control plane ...
	I0805 13:03:17.090264  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:17.100902  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:17.101263  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:17.102210  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:17.112522  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:03:57.113870  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:03:57.114408  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:03:57.114630  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:02.115811  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:02.116057  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:12.115990  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:12.116208  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:32.116734  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:32.117001  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119196  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:05:12.119475  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119502  451238 kubeadm.go:310] 
	I0805 13:05:12.119564  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:05:12.119622  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:05:12.119634  451238 kubeadm.go:310] 
	I0805 13:05:12.119680  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:05:12.119724  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:05:12.119880  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:05:12.119898  451238 kubeadm.go:310] 
	I0805 13:05:12.120029  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:05:12.120114  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:05:12.120169  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:05:12.120179  451238 kubeadm.go:310] 
	I0805 13:05:12.120321  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:05:12.120445  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:05:12.120455  451238 kubeadm.go:310] 
	I0805 13:05:12.120612  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:05:12.120751  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:05:12.120888  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:05:12.121010  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:05:12.121023  451238 kubeadm.go:310] 
	I0805 13:05:12.121325  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:05:12.121458  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:05:12.121545  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 13:05:12.121714  451238 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0805 13:05:12.121782  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:05:12.587687  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:05:12.603422  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:05:12.614302  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:05:12.614330  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:05:12.614391  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:05:12.625131  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:05:12.625199  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:05:12.635606  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:05:12.644896  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:05:12.644953  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:05:12.655178  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.664668  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:05:12.664753  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.675174  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:05:12.684765  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:05:12.684834  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:05:12.694762  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:05:12.930906  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:07:09.256859  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:07:09.257016  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 13:07:09.258511  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:07:09.258579  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:07:09.258710  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:07:09.258881  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:07:09.259022  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:07:09.259125  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:07:09.260912  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:07:09.261023  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:07:09.261123  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:07:09.261232  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:07:09.261319  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:07:09.261411  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:07:09.261507  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:07:09.261601  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:07:09.261690  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:07:09.261801  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:07:09.261946  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:07:09.262015  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:07:09.262119  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:07:09.262198  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:07:09.262273  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:07:09.262369  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:07:09.262464  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:07:09.262615  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:07:09.262731  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:07:09.262770  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:07:09.262831  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:07:09.264428  451238 out.go:204]   - Booting up control plane ...
	I0805 13:07:09.264537  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:07:09.264663  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:07:09.264774  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:07:09.264896  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:07:09.265144  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:07:09.265224  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:07:09.265318  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265554  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265630  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265783  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265886  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266143  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266221  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266387  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266472  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266656  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266673  451238 kubeadm.go:310] 
	I0805 13:07:09.266707  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:07:09.266738  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:07:09.266743  451238 kubeadm.go:310] 
	I0805 13:07:09.266788  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:07:09.266819  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:07:09.266924  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:07:09.266932  451238 kubeadm.go:310] 
	I0805 13:07:09.267050  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:07:09.267137  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:07:09.267192  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:07:09.267201  451238 kubeadm.go:310] 
	I0805 13:07:09.267316  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:07:09.267435  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:07:09.267445  451238 kubeadm.go:310] 
	I0805 13:07:09.267570  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:07:09.267683  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:07:09.267802  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:07:09.267898  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:07:09.267986  451238 kubeadm.go:310] 
	I0805 13:07:09.268003  451238 kubeadm.go:394] duration metric: took 7m57.870990174s to StartCluster
	I0805 13:07:09.268066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:07:09.268158  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:07:09.311436  451238 cri.go:89] found id: ""
	I0805 13:07:09.311471  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.311497  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:07:09.311509  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:07:09.311573  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:07:09.347748  451238 cri.go:89] found id: ""
	I0805 13:07:09.347776  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.347784  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:07:09.347797  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:07:09.347860  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:07:09.385418  451238 cri.go:89] found id: ""
	I0805 13:07:09.385445  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.385453  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:07:09.385460  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:07:09.385517  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:07:09.427209  451238 cri.go:89] found id: ""
	I0805 13:07:09.427255  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.427268  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:07:09.427276  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:07:09.427360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:07:09.461763  451238 cri.go:89] found id: ""
	I0805 13:07:09.461787  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.461795  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:07:09.461801  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:07:09.461854  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:07:09.498655  451238 cri.go:89] found id: ""
	I0805 13:07:09.498692  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.498705  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:07:09.498713  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:07:09.498782  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:07:09.534100  451238 cri.go:89] found id: ""
	I0805 13:07:09.534134  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.534143  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:07:09.534149  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:07:09.534207  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:07:09.570089  451238 cri.go:89] found id: ""
	I0805 13:07:09.570125  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.570137  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:07:09.570153  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:07:09.570176  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:07:09.625158  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:07:09.625199  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:07:09.640087  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:07:09.640119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:07:09.719851  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:07:09.719879  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:07:09.719895  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:07:09.832717  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:07:09.832758  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0805 13:07:09.878585  451238 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 13:07:09.878653  451238 out.go:239] * 
	* 
	W0805 13:07:09.878739  451238 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.878767  451238 out.go:239] * 
	* 
	W0805 13:07:09.879755  451238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 13:07:09.883027  451238 out.go:177] 
	W0805 13:07:09.884197  451238 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.884243  451238 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 13:07:09.884265  451238 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 13:07:09.885783  451238 out.go:177] 

                                                
                                                
** /stderr **
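The kubeadm failure captured above points at the kubelet and the container runtime as the first things to check. Below is a minimal triage sketch using only the commands kubeadm itself suggests in that output; it assumes the commands are run on the node (for example via "out/minikube-linux-amd64 -p old-k8s-version-635707 ssh"), and CONTAINERID is a placeholder for whatever failing container the listing turns up.

    # is the kubelet running, and why did it exit?
    systemctl status kubelet
    journalctl -xeu kubelet
    # the preflight warning notes the kubelet unit is not enabled
    systemctl enable kubelet.service
    # list control-plane containers started by cri-o, then read the logs of a failing one
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
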
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-635707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
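The "Suggestion" emitted in the log above is to retry the same start with the kubelet pinned to the systemd cgroup driver. A sketch of that retry, reusing the arguments of the failed invocation plus the suggested --extra-config flag (untested here; the profile name and flags are copied from this report):

    out/minikube-linux-amd64 start -p old-k8s-version-635707 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd
    # if it still fails, pull the kubelet journal from the node
    out/minikube-linux-amd64 -p old-k8s-version-635707 ssh "sudo journalctl -xeu kubelet"
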
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (236.030858ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-635707 logs -n 25
E0805 13:07:12.032910  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-635707 logs -n 25: (1.660007163s)
E0805 13:07:12.506143  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-119870 sudo cat                              | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo find                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo crio                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-119870                                       | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:55:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:55:11.960471  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960479  451238 out.go:304] Setting ErrFile to fd 2...
	I0805 12:55:11.960484  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960646  451238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:55:11.961145  451238 out.go:298] Setting JSON to false
	I0805 12:55:11.962063  451238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9459,"bootTime":1722853053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:55:11.962121  451238 start.go:139] virtualization: kvm guest
	I0805 12:55:11.964372  451238 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:55:11.965770  451238 notify.go:220] Checking for updates...
	I0805 12:55:11.965787  451238 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:55:11.967106  451238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:55:11.968790  451238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:55:11.970181  451238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:55:11.971500  451238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:55:11.973243  451238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:55:11.974825  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:55:11.975239  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.975319  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:11.990296  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0805 12:55:11.990704  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:11.991235  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:11.991259  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:11.991575  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:11.991765  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:11.993484  451238 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 12:55:11.994687  451238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:55:11.994952  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.994984  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:12.009528  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0805 12:55:12.009879  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:12.010353  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:12.010375  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:12.010670  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:12.010857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:12.044634  451238 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:55:12.045859  451238 start.go:297] selected driver: kvm2
	I0805 12:55:12.045876  451238 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.045987  451238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:55:12.046662  451238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.046731  451238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:55:12.061918  451238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:55:12.062400  451238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:55:12.062484  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:55:12.062502  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:55:12.062572  451238 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.062722  451238 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.064478  451238 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:55:10.820047  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:13.892041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:12.065640  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:55:12.065680  451238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:55:12.065701  451238 cache.go:56] Caching tarball of preloaded images
	I0805 12:55:12.065786  451238 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:55:12.065797  451238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:55:12.065897  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:55:12.066073  451238 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:55:19.971977  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:23.044092  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:29.124041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:32.196124  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:38.276045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:41.348117  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:47.428042  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:50.500022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:56.580074  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:59.652091  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:05.732072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:08.804128  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:14.884085  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:17.956073  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:24.036067  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:27.108059  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:33.188012  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:36.260134  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:42.340036  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:45.412038  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:51.492022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:54.564068  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:00.644018  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:03.716112  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:09.796041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:12.868080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:18.948054  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:22.020023  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:28.100099  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:31.172076  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:37.251997  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:40.324080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:46.404055  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:49.476072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:55.556045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:58.627984  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:58:01.632326  450576 start.go:364] duration metric: took 4m17.994768704s to acquireMachinesLock for "no-preload-669469"
	I0805 12:58:01.632391  450576 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:01.632403  450576 fix.go:54] fixHost starting: 
	I0805 12:58:01.632845  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:01.632880  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:01.648358  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0805 12:58:01.648860  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:01.649387  450576 main.go:141] libmachine: Using API Version  1
	I0805 12:58:01.649410  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:01.649779  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:01.649963  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:01.650176  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 12:58:01.651681  450576 fix.go:112] recreateIfNeeded on no-preload-669469: state=Stopped err=<nil>
	I0805 12:58:01.651715  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	W0805 12:58:01.651903  450576 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:01.653860  450576 out.go:177] * Restarting existing kvm2 VM for "no-preload-669469" ...
	I0805 12:58:01.655338  450576 main.go:141] libmachine: (no-preload-669469) Calling .Start
	I0805 12:58:01.655475  450576 main.go:141] libmachine: (no-preload-669469) Ensuring networks are active...
	I0805 12:58:01.656224  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network default is active
	I0805 12:58:01.656565  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network mk-no-preload-669469 is active
	I0805 12:58:01.656898  450576 main.go:141] libmachine: (no-preload-669469) Getting domain xml...
	I0805 12:58:01.657537  450576 main.go:141] libmachine: (no-preload-669469) Creating domain...
	I0805 12:58:02.879809  450576 main.go:141] libmachine: (no-preload-669469) Waiting to get IP...
	I0805 12:58:02.880800  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:02.881194  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:02.881270  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:02.881175  451829 retry.go:31] will retry after 303.380177ms: waiting for machine to come up
	I0805 12:58:03.185834  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.186259  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.186288  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.186214  451829 retry.go:31] will retry after 263.494141ms: waiting for machine to come up
	I0805 12:58:03.451923  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.452263  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.452340  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.452217  451829 retry.go:31] will retry after 310.615163ms: waiting for machine to come up
	I0805 12:58:01.629832  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:01.629873  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630250  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:58:01.630295  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630511  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:58:01.632158  450393 machine.go:97] duration metric: took 4m37.422562602s to provisionDockerMachine
	I0805 12:58:01.632208  450393 fix.go:56] duration metric: took 4m37.444588707s for fixHost
	I0805 12:58:01.632226  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 4m37.44461751s
	W0805 12:58:01.632250  450393 start.go:714] error starting host: provision: host is not running
	W0805 12:58:01.632431  450393 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0805 12:58:01.632445  450393 start.go:729] Will try again in 5 seconds ...
	I0805 12:58:03.764803  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.765280  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.765305  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.765243  451829 retry.go:31] will retry after 570.955722ms: waiting for machine to come up
	I0805 12:58:04.338423  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.338863  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.338893  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.338811  451829 retry.go:31] will retry after 485.490715ms: waiting for machine to come up
	I0805 12:58:04.825511  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.825882  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.825911  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.825823  451829 retry.go:31] will retry after 671.109731ms: waiting for machine to come up
	I0805 12:58:05.498113  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:05.498529  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:05.498557  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:05.498467  451829 retry.go:31] will retry after 997.668856ms: waiting for machine to come up
	I0805 12:58:06.497843  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:06.498144  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:06.498161  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:06.498120  451829 retry.go:31] will retry after 996.614411ms: waiting for machine to come up
	I0805 12:58:07.496801  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:07.497298  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:07.497334  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:07.497249  451829 retry.go:31] will retry after 1.384682595s: waiting for machine to come up
	I0805 12:58:06.634410  450393 start.go:360] acquireMachinesLock for embed-certs-321139: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:58:08.883309  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:08.883701  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:08.883732  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:08.883642  451829 retry.go:31] will retry after 2.017073843s: waiting for machine to come up
	I0805 12:58:10.903852  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:10.904279  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:10.904310  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:10.904233  451829 retry.go:31] will retry after 2.485880433s: waiting for machine to come up
	I0805 12:58:13.392693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:13.393169  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:13.393199  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:13.393116  451829 retry.go:31] will retry after 2.986076236s: waiting for machine to come up
	I0805 12:58:16.380921  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:16.381475  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:16.381508  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:16.381432  451829 retry.go:31] will retry after 4.291617536s: waiting for machine to come up
	I0805 12:58:21.948770  450884 start.go:364] duration metric: took 4m4.773878111s to acquireMachinesLock for "default-k8s-diff-port-371585"
	I0805 12:58:21.948843  450884 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:21.948851  450884 fix.go:54] fixHost starting: 
	I0805 12:58:21.949291  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:21.949337  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:21.966933  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0805 12:58:21.967356  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:21.967874  450884 main.go:141] libmachine: Using API Version  1
	I0805 12:58:21.967899  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:21.968326  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:21.968638  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:21.968874  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 12:58:21.970608  450884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371585: state=Stopped err=<nil>
	I0805 12:58:21.970631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	W0805 12:58:21.970789  450884 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:21.973235  450884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-371585" ...
	I0805 12:58:21.974564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Start
	I0805 12:58:21.974751  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring networks are active...
	I0805 12:58:21.975581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network default is active
	I0805 12:58:21.976001  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network mk-default-k8s-diff-port-371585 is active
	I0805 12:58:21.976376  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Getting domain xml...
	I0805 12:58:21.977078  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Creating domain...
	I0805 12:58:20.678231  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678743  450576 main.go:141] libmachine: (no-preload-669469) Found IP for machine: 192.168.72.223
	I0805 12:58:20.678771  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has current primary IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678786  450576 main.go:141] libmachine: (no-preload-669469) Reserving static IP address...
	I0805 12:58:20.679230  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.679266  450576 main.go:141] libmachine: (no-preload-669469) Reserved static IP address: 192.168.72.223
	I0805 12:58:20.679288  450576 main.go:141] libmachine: (no-preload-669469) DBG | skip adding static IP to network mk-no-preload-669469 - found existing host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"}
	I0805 12:58:20.679302  450576 main.go:141] libmachine: (no-preload-669469) DBG | Getting to WaitForSSH function...
	I0805 12:58:20.679317  450576 main.go:141] libmachine: (no-preload-669469) Waiting for SSH to be available...
	I0805 12:58:20.681864  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682263  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.682297  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682447  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH client type: external
	I0805 12:58:20.682484  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa (-rw-------)
	I0805 12:58:20.682539  450576 main.go:141] libmachine: (no-preload-669469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:20.682557  450576 main.go:141] libmachine: (no-preload-669469) DBG | About to run SSH command:
	I0805 12:58:20.682568  450576 main.go:141] libmachine: (no-preload-669469) DBG | exit 0
	I0805 12:58:20.807791  450576 main.go:141] libmachine: (no-preload-669469) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:20.808168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetConfigRaw
	I0805 12:58:20.808767  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:20.811170  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811486  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.811517  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811738  450576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/config.json ...
	I0805 12:58:20.811957  450576 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:20.811976  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:20.812203  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.814305  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814656  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.814693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814823  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.814996  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815156  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815329  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.815503  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.815871  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.815887  450576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:20.920311  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:20.920344  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920642  450576 buildroot.go:166] provisioning hostname "no-preload-669469"
	I0805 12:58:20.920695  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920951  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.924029  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924583  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.924611  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924770  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.925001  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925190  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925334  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.925514  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.925755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.925774  450576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-669469 && echo "no-preload-669469" | sudo tee /etc/hostname
	I0805 12:58:21.046579  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-669469
	
	I0805 12:58:21.046614  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.049322  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049657  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.049687  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049851  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.050049  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050239  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050412  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.050588  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.050755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.050771  450576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-669469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-669469/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-669469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:21.165100  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:21.165134  450576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:21.165170  450576 buildroot.go:174] setting up certificates
	I0805 12:58:21.165180  450576 provision.go:84] configureAuth start
	I0805 12:58:21.165191  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:21.165477  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.168018  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168399  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.168443  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168703  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.171168  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171536  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.171565  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171638  450576 provision.go:143] copyHostCerts
	I0805 12:58:21.171713  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:21.171724  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:21.171807  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:21.171920  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:21.171930  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:21.171955  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:21.172010  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:21.172016  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:21.172037  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:21.172095  450576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.no-preload-669469 san=[127.0.0.1 192.168.72.223 localhost minikube no-preload-669469]
	I0805 12:58:21.287395  450576 provision.go:177] copyRemoteCerts
	I0805 12:58:21.287463  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:21.287505  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.290416  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290765  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.290796  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290962  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.291169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.291323  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.291460  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.373992  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:58:21.398249  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:21.422950  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:58:21.446469  450576 provision.go:87] duration metric: took 281.275299ms to configureAuth
	I0805 12:58:21.446500  450576 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:21.446688  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:58:21.446813  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.449833  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450219  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.450235  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450526  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.450814  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.450993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.451168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.451342  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.451515  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.451532  450576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:21.714813  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:21.714842  450576 machine.go:97] duration metric: took 902.872257ms to provisionDockerMachine
	I0805 12:58:21.714858  450576 start.go:293] postStartSetup for "no-preload-669469" (driver="kvm2")
	I0805 12:58:21.714889  450576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:21.714940  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.715304  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:21.715333  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.717989  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718405  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.718427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718597  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.718832  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.718993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.719152  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.802634  450576 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:21.806957  450576 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:21.806985  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:21.807079  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:21.807186  450576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:21.807293  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:21.816690  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:21.839848  450576 start.go:296] duration metric: took 124.973515ms for postStartSetup
	I0805 12:58:21.839903  450576 fix.go:56] duration metric: took 20.207499572s for fixHost
	I0805 12:58:21.839934  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.842548  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.842869  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.842893  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.843090  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.843310  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843502  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843640  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.843815  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.844015  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.844029  450576 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:21.948584  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862701.921979093
	
	I0805 12:58:21.948613  450576 fix.go:216] guest clock: 1722862701.921979093
	I0805 12:58:21.948623  450576 fix.go:229] Guest: 2024-08-05 12:58:21.921979093 +0000 UTC Remote: 2024-08-05 12:58:21.83991063 +0000 UTC m=+278.340267839 (delta=82.068463ms)
	I0805 12:58:21.948671  450576 fix.go:200] guest clock delta is within tolerance: 82.068463ms
	I0805 12:58:21.948680  450576 start.go:83] releasing machines lock for "no-preload-669469", held for 20.316310092s
	I0805 12:58:21.948713  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.948990  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.951624  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952086  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.952136  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952256  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952797  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952984  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.953065  450576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:21.953113  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.953227  450576 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:21.953255  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.955837  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956081  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956200  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956227  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956370  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956504  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956528  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.956568  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956670  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.956760  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956857  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.956906  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.957058  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.957205  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:22.058847  450576 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:22.065110  450576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:22.211415  450576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:22.219405  450576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:22.219492  450576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:22.240631  450576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:22.240659  450576 start.go:495] detecting cgroup driver to use...
	I0805 12:58:22.240764  450576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:22.258777  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:22.273312  450576 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:22.273400  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:22.288455  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:22.305028  450576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:22.428098  450576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:22.586232  450576 docker.go:233] disabling docker service ...
	I0805 12:58:22.586318  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:22.611888  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:22.627393  450576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:22.757335  450576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:22.878168  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:22.896174  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:22.914395  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:23.229202  450576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 12:58:23.229300  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.242180  450576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:23.242262  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.254577  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.265805  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.276522  450576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:23.287288  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.297863  450576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.314322  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.324662  450576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:23.334125  450576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:23.334192  450576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:23.346701  450576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:23.356256  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:23.474046  450576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:23.617276  450576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:23.617363  450576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:23.622001  450576 start.go:563] Will wait 60s for crictl version
	I0805 12:58:23.622047  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:23.626041  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:23.670186  450576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:23.670267  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.700616  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.733411  450576 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 12:58:23.254293  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting to get IP...
	I0805 12:58:23.255331  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255802  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255880  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.255773  451963 retry.go:31] will retry after 245.269435ms: waiting for machine to come up
	I0805 12:58:23.502617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503130  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.503068  451963 retry.go:31] will retry after 243.155673ms: waiting for machine to come up
	I0805 12:58:23.747498  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747913  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.747867  451963 retry.go:31] will retry after 459.286566ms: waiting for machine to come up
	I0805 12:58:24.208594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209076  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209127  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.209003  451963 retry.go:31] will retry after 499.069946ms: waiting for machine to come up
	I0805 12:58:24.709128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709554  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709577  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.709512  451963 retry.go:31] will retry after 732.735525ms: waiting for machine to come up
	I0805 12:58:25.443632  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444185  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:25.444125  451963 retry.go:31] will retry after 883.69375ms: waiting for machine to come up
	I0805 12:58:26.329477  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330010  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:26.329947  451963 retry.go:31] will retry after 1.157298734s: waiting for machine to come up
	I0805 12:58:23.734875  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:23.737945  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738460  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:23.738487  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738646  450576 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:23.742894  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:23.756164  450576 kubeadm.go:883] updating cluster {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:23.756435  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.035575  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.352144  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.657175  450576 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 12:58:24.657266  450576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:24.694685  450576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 12:58:24.694720  450576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:58:24.694809  450576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.694831  450576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.694845  450576 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0805 12:58:24.694867  450576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.694835  450576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.694815  450576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.694801  450576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.694917  450576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.696859  450576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.696860  450576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696902  450576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.696904  450576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.696881  450576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0805 12:58:24.864249  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.867334  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.905018  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.920294  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.925405  450576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0805 12:58:24.925440  450576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0805 12:58:24.925456  450576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.925476  450576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.925508  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.925520  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.973191  450576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0805 12:58:24.973240  450576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.973304  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.986685  450576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0805 12:58:24.986706  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.986723  450576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.986772  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.037012  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.037066  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:25.037132  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.067311  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.068528  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.073769  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073831  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.073872  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073933  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.082476  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0805 12:58:25.126044  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0805 12:58:25.126080  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126127  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0805 12:58:25.126144  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126230  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:25.149903  450576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0805 12:58:25.149965  450576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.150028  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196288  450576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0805 12:58:25.196336  450576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.196388  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196416  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0805 12:58:25.196510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0805 12:58:25.651632  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.532922  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.406747514s)
	I0805 12:58:27.532959  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0805 12:58:27.532994  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533010  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.406755032s)
	I0805 12:58:27.533048  450576 ssh_runner.go:235] Completed: which crictl: (2.383000552s)
	I0805 12:58:27.533050  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0805 12:58:27.533082  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533082  450576 ssh_runner.go:235] Completed: which crictl: (2.336681164s)
	I0805 12:58:27.533095  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:27.533117  450576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.88145852s)
	I0805 12:58:27.533139  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:27.533161  450576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0805 12:58:27.533198  450576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.533234  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:27.488683  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489080  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489108  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:27.489027  451963 retry.go:31] will retry after 997.566168ms: waiting for machine to come up
	I0805 12:58:28.488397  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488846  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488878  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:28.488794  451963 retry.go:31] will retry after 1.327498575s: waiting for machine to come up
	I0805 12:58:29.818339  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818705  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818735  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:29.818660  451963 retry.go:31] will retry after 2.105158858s: waiting for machine to come up
	I0805 12:58:31.925036  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925601  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:31.925492  451963 retry.go:31] will retry after 2.860711737s: waiting for machine to come up
	I0805 12:58:29.629896  450576 ssh_runner.go:235] Completed: which crictl: (2.096633143s)
	I0805 12:58:29.630000  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:29.630084  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.096969259s)
	I0805 12:58:29.630184  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0805 12:58:29.630102  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.09697893s)
	I0805 12:58:29.630255  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0805 12:58:29.630121  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0: (2.096957841s)
	I0805 12:58:29.630282  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:29.630286  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630313  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.630322  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630381  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.675831  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0805 12:58:29.675914  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 12:58:29.676019  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:31.695376  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.06501136s)
	I0805 12:58:31.695429  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0805 12:58:31.695458  450576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:31.695476  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.019437866s)
	I0805 12:58:31.695382  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (2.064967299s)
	I0805 12:58:31.695510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0805 12:58:31.695523  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0805 12:58:31.695536  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:34.789126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789644  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789673  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:34.789592  451963 retry.go:31] will retry after 2.763937018s: waiting for machine to come up
	I0805 12:58:33.659147  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.963588438s)
	I0805 12:58:33.659183  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0805 12:58:33.659216  450576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:33.659263  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:37.466579  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.807281649s)
	I0805 12:58:37.466623  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0805 12:58:37.466657  450576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:37.466709  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:38.111584  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 12:58:38.111633  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:38.111678  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:37.554827  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555233  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555263  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:37.555184  451963 retry.go:31] will retry after 3.143735106s: waiting for machine to come up
	I0805 12:58:40.701139  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701615  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Found IP for machine: 192.168.50.228
	I0805 12:58:40.701649  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has current primary IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701660  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserving static IP address...
	I0805 12:58:40.702105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.702126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserved static IP address: 192.168.50.228
	I0805 12:58:40.702146  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | skip adding static IP to network mk-default-k8s-diff-port-371585 - found existing host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"}
	I0805 12:58:40.702156  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for SSH to be available...
	I0805 12:58:40.702198  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Getting to WaitForSSH function...
	I0805 12:58:40.704600  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.704920  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.704950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.705091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH client type: external
	I0805 12:58:40.705129  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa (-rw-------)
	I0805 12:58:40.705179  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:40.705200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | About to run SSH command:
	I0805 12:58:40.705218  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | exit 0
	I0805 12:58:40.836818  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:40.837228  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetConfigRaw
	I0805 12:58:40.837884  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:40.840503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.840843  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.840870  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.841129  450884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/config.json ...
	I0805 12:58:40.841353  450884 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:40.841373  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:40.841587  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.843943  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844308  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.844336  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.844614  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844922  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.845067  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.845322  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.845333  450884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:40.952367  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:40.952410  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952733  450884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-371585"
	I0805 12:58:40.952762  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.955642  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.956077  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.956493  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956651  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956804  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.957027  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.957239  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.957255  450884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-371585 && echo "default-k8s-diff-port-371585" | sudo tee /etc/hostname
	I0805 12:58:41.077775  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-371585
	
	I0805 12:58:41.077808  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.080777  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081230  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.081273  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081406  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.081631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081963  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.082139  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.082315  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.082333  450884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-371585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-371585/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-371585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:41.200835  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:41.200871  450884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:41.200923  450884 buildroot.go:174] setting up certificates
	I0805 12:58:41.200934  450884 provision.go:84] configureAuth start
	I0805 12:58:41.200945  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:41.201284  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:41.204107  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204460  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.204494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.206634  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.206948  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.206977  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.207048  450884 provision.go:143] copyHostCerts
	I0805 12:58:41.207139  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:41.207151  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:41.207215  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:41.207333  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:41.207345  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:41.207372  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:41.207451  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:41.207462  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:41.207502  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:41.207573  450884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-371585 san=[127.0.0.1 192.168.50.228 default-k8s-diff-port-371585 localhost minikube]
	I0805 12:58:41.357243  450884 provision.go:177] copyRemoteCerts
	I0805 12:58:41.357344  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:41.357386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.360309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360697  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.360738  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360933  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.361120  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.361295  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.361474  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.454251  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:41.480595  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0805 12:58:41.506729  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:58:41.533349  450884 provision.go:87] duration metric: took 332.399026ms to configureAuth
	I0805 12:58:41.533402  450884 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:41.533575  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:58:41.533655  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.536469  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.536831  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.536862  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.537006  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.537197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537541  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.537734  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.537946  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.537968  450884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:41.827043  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:41.827078  450884 machine.go:97] duration metric: took 985.710155ms to provisionDockerMachine
	I0805 12:58:41.827095  450884 start.go:293] postStartSetup for "default-k8s-diff-port-371585" (driver="kvm2")
	I0805 12:58:41.827109  450884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:41.827145  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:41.827564  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:41.827605  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.830350  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830724  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.830761  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830853  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.831034  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.831206  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.831329  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.915261  450884 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:41.919719  450884 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:41.919760  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:41.919835  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:41.919930  450884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:41.920062  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:41.929842  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:41.958933  450884 start.go:296] duration metric: took 131.820227ms for postStartSetup
	I0805 12:58:41.958981  450884 fix.go:56] duration metric: took 20.010130311s for fixHost
	I0805 12:58:41.959012  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.962092  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962510  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.962540  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962726  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.962968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963153  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.963479  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.963687  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.963700  450884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:58:42.080993  451238 start.go:364] duration metric: took 3m30.014883629s to acquireMachinesLock for "old-k8s-version-635707"
	I0805 12:58:42.081066  451238 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:42.081076  451238 fix.go:54] fixHost starting: 
	I0805 12:58:42.081569  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:42.081611  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:42.101889  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0805 12:58:42.102366  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:42.102910  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:58:42.102947  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:42.103310  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:42.103552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:58:42.103718  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:58:42.105465  451238 fix.go:112] recreateIfNeeded on old-k8s-version-635707: state=Stopped err=<nil>
	I0805 12:58:42.105504  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	W0805 12:58:42.105674  451238 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:42.107563  451238 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-635707" ...
	I0805 12:58:39.567840  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.456137011s)
	I0805 12:58:39.567879  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0805 12:58:39.567905  450576 cache_images.go:123] Successfully loaded all cached images
	I0805 12:58:39.567911  450576 cache_images.go:92] duration metric: took 14.873174481s to LoadCachedImages
	I0805 12:58:39.567921  450576 kubeadm.go:934] updating node { 192.168.72.223 8443 v1.31.0-rc.0 crio true true} ...
	I0805 12:58:39.568053  450576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-669469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:39.568137  450576 ssh_runner.go:195] Run: crio config
	I0805 12:58:39.616607  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:39.616634  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:39.616660  450576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:39.616683  450576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.223 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-669469 NodeName:no-preload-669469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:39.616822  450576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-669469"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:39.616896  450576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 12:58:39.627827  450576 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:39.627901  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:39.637348  450576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0805 12:58:39.653917  450576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 12:58:39.670196  450576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0805 12:58:39.686922  450576 ssh_runner.go:195] Run: grep 192.168.72.223	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:39.690804  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:39.703146  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:39.834718  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:39.857015  450576 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469 for IP: 192.168.72.223
	I0805 12:58:39.857036  450576 certs.go:194] generating shared ca certs ...
	I0805 12:58:39.857057  450576 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:39.857229  450576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:39.857286  450576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:39.857300  450576 certs.go:256] generating profile certs ...
	I0805 12:58:39.857431  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/client.key
	I0805 12:58:39.857489  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key.dd0884bb
	I0805 12:58:39.857535  450576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key
	I0805 12:58:39.857683  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:39.857723  450576 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:39.857739  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:39.857769  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:39.857834  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:39.857872  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:39.857923  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:39.858695  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:39.895944  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:39.925816  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:39.960150  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:39.993307  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:58:40.027900  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:58:40.053492  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:40.077331  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:40.101010  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:40.123991  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:40.147563  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:40.170414  450576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:40.188256  450576 ssh_runner.go:195] Run: openssl version
	I0805 12:58:40.193955  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:40.204793  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209061  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209115  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.214948  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:40.226193  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:40.237723  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.241960  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.242019  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.247502  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:40.258791  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:40.270176  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274717  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274786  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.280457  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:40.292091  450576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:40.296842  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:40.303003  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:40.309009  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:40.314951  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:40.320674  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:40.326433  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:58:40.331848  450576 kubeadm.go:392] StartCluster: {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:40.331938  450576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:40.331975  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.374390  450576 cri.go:89] found id: ""
	I0805 12:58:40.374482  450576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:40.385467  450576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:40.385485  450576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:40.385531  450576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:40.395411  450576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:40.396455  450576 kubeconfig.go:125] found "no-preload-669469" server: "https://192.168.72.223:8443"
	I0805 12:58:40.400090  450576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:40.410942  450576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.223
	I0805 12:58:40.410971  450576 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:40.410985  450576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:40.411032  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.453021  450576 cri.go:89] found id: ""
	I0805 12:58:40.453115  450576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:40.470389  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:40.480421  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:40.480445  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:40.480502  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:58:40.489625  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:40.489672  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:40.499261  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:58:40.508571  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:40.508634  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:40.517811  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.526563  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:40.526620  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.535753  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:58:40.544981  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:40.545040  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:40.555237  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:40.565180  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:40.683889  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.632122  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.866665  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.944022  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:42.048030  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:42.048127  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:42.548995  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.048336  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.086457  450576 api_server.go:72] duration metric: took 1.038426772s to wait for apiserver process to appear ...
	I0805 12:58:43.086487  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:43.086509  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:43.086982  450576 api_server.go:269] stopped: https://192.168.72.223:8443/healthz: Get "https://192.168.72.223:8443/healthz": dial tcp 192.168.72.223:8443: connect: connection refused
	I0805 12:58:42.080800  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862722.053648046
	
	I0805 12:58:42.080828  450884 fix.go:216] guest clock: 1722862722.053648046
	I0805 12:58:42.080839  450884 fix.go:229] Guest: 2024-08-05 12:58:42.053648046 +0000 UTC Remote: 2024-08-05 12:58:41.958987261 +0000 UTC m=+264.923354352 (delta=94.660785ms)
	I0805 12:58:42.080867  450884 fix.go:200] guest clock delta is within tolerance: 94.660785ms
	I0805 12:58:42.080876  450884 start.go:83] releasing machines lock for "default-k8s-diff-port-371585", held for 20.132054114s
	I0805 12:58:42.080916  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.081260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:42.084196  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084662  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.084695  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084867  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085589  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085786  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085875  450884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:42.085925  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.086064  450884 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:42.086091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.088693  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089018  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089042  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.089455  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.089729  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.089730  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089785  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089881  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.089970  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.090128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.090286  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.090457  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.193160  450884 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:42.199341  450884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:42.344713  450884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:42.350944  450884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:42.351026  450884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:42.368162  450884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:42.368196  450884 start.go:495] detecting cgroup driver to use...
	I0805 12:58:42.368260  450884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:42.384477  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:42.401847  450884 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:42.401907  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:42.416318  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:42.430994  450884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:42.545944  450884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:42.721877  450884 docker.go:233] disabling docker service ...
	I0805 12:58:42.721961  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:42.743504  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:42.763111  450884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:42.914270  450884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
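
With CRI-O chosen as the runtime, cri-docker and docker are stopped, disabled and masked so socket activation cannot bring them back. The sequence above amounts to (sketch of the same systemctl calls):

    # Leave CRI-O as the only runtime answering on the CRI socket
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is inactive"
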
	I0805 12:58:43.064816  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:43.090748  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:43.115493  450884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:58:43.115565  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.132497  450884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:43.132583  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.146700  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.159880  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.175598  450884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:43.191263  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.207573  450884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.229567  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
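
After these sed edits, /etc/crio/crio.conf.d/02-crio.conf carries the pause image, the cgroupfs driver and the unprivileged-port sysctl. A quick way to confirm the result (the commented lines show the values the edits above set; the exact file layout may differ):

    # Inspect the edited CRI-O drop-in
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
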
	I0805 12:58:43.248604  450884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:43.261272  450884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:43.261350  450884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:43.276740  450884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
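
The sysctl probe fails only because br_netfilter is not loaded yet; loading the module and enabling IPv4 forwarding, as the two commands above do, satisfies the bridge networking prerequisites kubeadm's preflight also checks. The same steps by hand (illustrative):

    # Load bridge netfilter and enable forwarding, then re-check the sysctl
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
    sysctl net.bridge.bridge-nf-call-iptables   # should now resolve (typically "= 1")
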
	I0805 12:58:43.288473  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:43.436066  450884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:43.593264  450884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:43.593355  450884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:43.599342  450884 start.go:563] Will wait 60s for crictl version
	I0805 12:58:43.599419  450884 ssh_runner.go:195] Run: which crictl
	I0805 12:58:43.603681  450884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:43.651181  450884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:43.651296  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.691418  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.725036  450884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:58:42.109016  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .Start
	I0805 12:58:42.109214  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:58:42.110192  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:58:42.110686  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:58:42.111108  451238 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:58:42.112194  451238 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:58:43.453015  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:58:43.453994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.454435  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.454504  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.454435  452186 retry.go:31] will retry after 270.355403ms: waiting for machine to come up
	I0805 12:58:43.727101  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.727583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.727641  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.727568  452186 retry.go:31] will retry after 313.75466ms: waiting for machine to come up
	I0805 12:58:44.043303  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.043954  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.043981  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.043855  452186 retry.go:31] will retry after 308.608573ms: waiting for machine to come up
	I0805 12:58:44.354830  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.355396  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.355421  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.355305  452186 retry.go:31] will retry after 510.256657ms: waiting for machine to come up
	I0805 12:58:44.866970  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.867534  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.867559  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.867424  452186 retry.go:31] will retry after 668.55006ms: waiting for machine to come up
	I0805 12:58:45.537377  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:45.537959  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:45.537989  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:45.537909  452186 retry.go:31] will retry after 677.549944ms: waiting for machine to come up
	I0805 12:58:46.217077  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:46.217591  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:46.217625  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:46.217483  452186 retry.go:31] will retry after 847.636867ms: waiting for machine to come up
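
Here the kvm2 driver is polling libvirt for a DHCP lease on the mk-old-k8s-version-635707 network, backing off between attempts until the domain gets an address. An equivalent check from the host (illustrative; requires virsh; the MAC and network name are taken from the log):

    # Wait for the domain's MAC to appear in the libvirt network's DHCP leases
    until virsh net-dhcp-leases mk-old-k8s-version-635707 | grep -q 52:54:00:2a:da:c5; do
      sleep 1
    done
    virsh net-dhcp-leases mk-old-k8s-version-635707
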
	I0805 12:58:43.726277  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:43.729689  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730162  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:43.730195  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730391  450884 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:43.735448  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:43.749640  450884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:43.749808  450884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:58:43.749886  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:43.798507  450884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:58:43.798584  450884 ssh_runner.go:195] Run: which lz4
	I0805 12:58:43.803306  450884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:58:43.809104  450884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:58:43.809144  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:58:45.333758  450884 crio.go:462] duration metric: took 1.530500213s to copy over tarball
	I0805 12:58:45.333831  450884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
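
Because no preloaded images were found in the CRI-O store, the v1.30.3 preload tarball is copied to /preloaded.tar.lz4 on the guest and unpacked over /var. Inside the guest the extraction and cleanup reduce to (sketch; the tar flags are the ones in the log, and lz4 must be installed):

    # Unpack the preload over /var, drop the tarball, then sanity-check the image store
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images | grep kube-apiserver   # rough check that control-plane images are now present
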
	I0805 12:58:43.587275  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.303995  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.304038  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.304057  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.308815  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.308849  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.587239  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.595116  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.595151  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.087372  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.094319  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.094363  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.586909  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.592210  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.592252  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.086763  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.095151  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.095182  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.586840  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.593834  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.593870  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.087516  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.093647  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:49.093677  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.587309  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.593592  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 12:58:49.602960  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:58:49.603001  450576 api_server.go:131] duration metric: took 6.516505116s to wait for apiserver health ...
	I0805 12:58:49.603013  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:49.603024  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:49.851135  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
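
"Configuring bridge CNI" writes a conflist for the standard bridge plugin under /etc/cni/net.d, using the 10.244.0.0/16 pod CIDR minikube defaults to with this CNI. A minimal example of such a config (illustrative only; the file name, cniVersion and exact fields are assumptions, not a copy of what minikube writes):

    # A typical bridge CNI conflist for a 10.244.0.0/16 pod network (illustrative)
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
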
	I0805 12:58:47.067245  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:47.067895  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:47.067930  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:47.067838  452186 retry.go:31] will retry after 1.275228928s: waiting for machine to come up
	I0805 12:58:48.344881  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:48.345295  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:48.345319  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:48.345258  452186 retry.go:31] will retry after 1.826891386s: waiting for machine to come up
	I0805 12:58:50.174583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:50.175111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:50.175138  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:50.175074  452186 retry.go:31] will retry after 1.53756677s: waiting for machine to come up
	I0805 12:58:51.714025  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:51.714529  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:51.714553  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:51.714485  452186 retry.go:31] will retry after 2.762270002s: waiting for machine to come up
	I0805 12:58:47.908896  450884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575029516s)
	I0805 12:58:47.908929  450884 crio.go:469] duration metric: took 2.575138566s to extract the tarball
	I0805 12:58:47.908938  450884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:58:47.964757  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:48.013358  450884 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:58:48.013392  450884 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:58:48.013404  450884 kubeadm.go:934] updating node { 192.168.50.228 8444 v1.30.3 crio true true} ...
	I0805 12:58:48.013533  450884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-371585 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:48.013623  450884 ssh_runner.go:195] Run: crio config
	I0805 12:58:48.062183  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:48.062219  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:48.062238  450884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:48.062274  450884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-371585 NodeName:default-k8s-diff-port-371585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:48.062474  450884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-371585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:48.062552  450884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:58:48.076490  450884 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:48.076583  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:48.090058  450884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0805 12:58:48.110202  450884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:58:48.131420  450884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0805 12:58:48.151774  450884 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:48.156904  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:48.172398  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:48.292999  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:48.310331  450884 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585 for IP: 192.168.50.228
	I0805 12:58:48.310366  450884 certs.go:194] generating shared ca certs ...
	I0805 12:58:48.310389  450884 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:48.310576  450884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:48.310640  450884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:48.310658  450884 certs.go:256] generating profile certs ...
	I0805 12:58:48.310803  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/client.key
	I0805 12:58:48.310881  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key.f7891227
	I0805 12:58:48.310946  450884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key
	I0805 12:58:48.311231  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:48.311317  450884 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:48.311354  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:48.311408  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:48.311447  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:48.311485  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:48.311545  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:48.312365  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:48.363733  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:48.395662  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:48.450822  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:48.495611  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 12:58:48.529393  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:58:48.557543  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:48.584777  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:48.611987  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:48.637500  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:48.664469  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:48.690221  450884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:48.709082  450884 ssh_runner.go:195] Run: openssl version
	I0805 12:58:48.716181  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:48.728455  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733395  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733456  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.739295  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:48.750515  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:48.761506  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.765995  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.766052  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.772121  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:48.783123  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:48.794318  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798795  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798843  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.804878  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:48.816757  450884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:48.821686  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:48.828121  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:48.834386  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:48.840425  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:48.846218  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:48.852035  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
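	Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours). A small Go equivalent of that check, assuming a PEM-encoded certificate on disk (the path and helper name are illustrative, not minikube code):

	// certcheck_sketch.go: report whether a certificate expires within 24h,
	// mirroring `openssl x509 -checkend 86400` (sketch only).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin returns true if the certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}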
	I0805 12:58:48.857997  450884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:48.858131  450884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:48.858179  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.908402  450884 cri.go:89] found id: ""
	I0805 12:58:48.908471  450884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:48.921185  450884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:48.921207  450884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:48.921258  450884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:48.932907  450884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:48.933927  450884 kubeconfig.go:125] found "default-k8s-diff-port-371585" server: "https://192.168.50.228:8444"
	I0805 12:58:48.936058  450884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:48.947233  450884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.228
	I0805 12:58:48.947262  450884 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:48.947273  450884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:48.947313  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.988179  450884 cri.go:89] found id: ""
	I0805 12:58:48.988281  450884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:49.005901  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:49.016576  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:49.016597  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:49.016648  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 12:58:49.029718  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:49.029822  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:49.041670  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 12:58:49.051650  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:49.051724  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:49.061671  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.071671  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:49.071755  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.082022  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 12:58:49.092013  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:49.092103  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:49.105446  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:49.118581  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:49.233260  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.199462  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.418823  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.500350  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.594991  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:50.595109  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.096171  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.596111  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.633309  450884 api_server.go:72] duration metric: took 1.038316986s to wait for apiserver process to appear ...
	I0805 12:58:51.633350  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:51.633377  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:51.634005  450884 api_server.go:269] stopped: https://192.168.50.228:8444/healthz: Get "https://192.168.50.228:8444/healthz": dial tcp 192.168.50.228:8444: connect: connection refused
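	The healthz wait seen here polls the apiserver endpoint until it stops refusing connections and eventually answers 200. A minimal Go sketch of such a poll loop; TLS verification is skipped only to keep the example short (minikube itself authenticates with the cluster CA and client certificates), and the URL and timeout are taken from this log for illustration.

	// healthz_poll_sketch.go: poll an apiserver /healthz endpoint until it
	// returns 200 "ok" or a deadline passes (illustrative sketch).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.228:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}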
	I0805 12:58:50.021635  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:50.036338  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:50.060746  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:50.159670  450576 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:50.159724  450576 system_pods.go:61] "coredns-6f6b679f8f-nkv88" [ee7e59fb-2500-4d7a-9537-e38e08fb2445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:50.159737  450576 system_pods.go:61] "etcd-no-preload-669469" [095df0f1-069a-419f-815b-ddbec3a2291f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:50.159762  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [20b45902-b807-457a-93b3-d2b9b76d2598] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:50.159772  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [122a47ed-7f6f-4b2e-980a-45f41b997dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:50.159780  450576 system_pods.go:61] "kube-proxy-cwq69" [78e0333b-a0f4-40a6-a04d-6971bb4d09a8] Running
	I0805 12:58:50.159788  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [88010c2b-b32f-4fe1-952d-262e881b76dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:50.159796  450576 system_pods.go:61] "metrics-server-6867b74b74-p7b2r" [7e4dd805-07c8-4339-bf1a-57a98fd674cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:50.159808  450576 system_pods.go:61] "storage-provisioner" [207c46c5-c3c0-4f0b-b3ea-9b42b9e5f761] Running
	I0805 12:58:50.159817  450576 system_pods.go:74] duration metric: took 99.038765ms to wait for pod list to return data ...
	I0805 12:58:50.159830  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:50.163888  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:50.163923  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:50.163956  450576 node_conditions.go:105] duration metric: took 4.11869ms to run NodePressure ...
	I0805 12:58:50.163980  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.849885  450576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854483  450576 kubeadm.go:739] kubelet initialised
	I0805 12:58:50.854505  450576 kubeadm.go:740] duration metric: took 4.588388ms waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854514  450576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:50.861245  450576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:52.869370  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
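	The pod_ready waits above repeatedly fetch each system-critical pod and inspect its Ready condition. A rough client-go sketch of that check; the kubeconfig path, namespace, and pod name are taken from this log, but the helper itself is illustrative rather than minikube's own code.

	// podready_sketch.go: wait until a pod's Ready condition is True
	// (illustrative client-go sketch).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has condition Ready=True.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ok, err := podReady(context.Background(), cs, "kube-system", "coredns-6f6b679f8f-nkv88")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}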
	I0805 12:58:52.134427  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.933253  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.933288  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:54.933305  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.970883  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.970928  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:55.134250  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.139762  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.139798  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:55.634499  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.644495  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.644532  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.134123  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.141958  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:56.142002  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.633573  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.640578  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 12:58:56.649624  450884 api_server.go:141] control plane version: v1.30.3
	I0805 12:58:56.649659  450884 api_server.go:131] duration metric: took 5.016299114s to wait for apiserver health ...
	I0805 12:58:56.649671  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:56.649681  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:56.651587  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:54.478201  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:54.478619  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:54.478650  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:54.478579  452186 retry.go:31] will retry after 2.992766963s: waiting for machine to come up
	I0805 12:58:56.652853  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:56.663878  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
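	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI chosen a few lines earlier. The exact JSON minikube writes is not shown in this log, so the values below are representative only: a small Go program writing a typical bridge+portmap conflist with a host-local IPAM subnet matching the pod CIDR above.

	// cni_conflist_sketch.go: write a representative bridge CNI conflist
	// (values are illustrative, not minikube's byte-for-byte file).
	package main

	import (
		"fmt"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}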
	I0805 12:58:56.699765  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:56.715040  450884 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:56.715078  450884 system_pods.go:61] "coredns-7db6d8ff4d-8rzb7" [df42e41d-4544-493f-a09d-678df1fb5258] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:56.715085  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [1ab6cd59-432a-44b8-95f2-948c585d9bbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:56.715092  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [c9173b98-c77e-4ad0-aea5-c894c045e0c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:56.715101  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [283737ec-1afa-4994-9cee-b655a8397a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:56.715105  450884 system_pods.go:61] "kube-proxy-5dr9v" [767ccb8b-2db0-4b59-b3b0-e099185bc725] Running
	I0805 12:58:56.715111  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [fb3cfdea-9370-4842-a5ab-5ac24804f59e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:56.715116  450884 system_pods.go:61] "metrics-server-569cc877fc-dsrqr" [0d4c79e4-aa6c-42f5-840b-91b9d714d078] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:56.715125  450884 system_pods.go:61] "storage-provisioner" [2dba6f50-5cdc-4195-8daf-c19dac38f488] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:58:56.715133  450884 system_pods.go:74] duration metric: took 15.343284ms to wait for pod list to return data ...
	I0805 12:58:56.715144  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:56.720006  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:56.720031  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:56.720042  450884 node_conditions.go:105] duration metric: took 4.893566ms to run NodePressure ...
	I0805 12:58:56.720059  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:56.985822  450884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990461  450884 kubeadm.go:739] kubelet initialised
	I0805 12:58:56.990484  450884 kubeadm.go:740] duration metric: took 4.636814ms waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990493  450884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:56.996266  450884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.001407  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001434  450884 pod_ready.go:81] duration metric: took 5.140963ms for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.001446  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001456  450884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.005437  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005473  450884 pod_ready.go:81] duration metric: took 3.995646ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.005486  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005495  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.009923  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009943  450884 pod_ready.go:81] duration metric: took 4.439871ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.009952  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009958  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:54.869534  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:56.370007  450576 pod_ready.go:92] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:56.370035  450576 pod_ready.go:81] duration metric: took 5.508756413s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:56.370045  450576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376357  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.376386  450576 pod_ready.go:81] duration metric: took 2.006334873s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376396  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.473094  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:57.473555  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:57.473587  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:57.473495  452186 retry.go:31] will retry after 4.27138033s: waiting for machine to come up
	I0805 12:59:01.750111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750558  451238 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:59:01.750586  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750593  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:59:01.751003  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:59:01.751061  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.751081  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:59:01.751112  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | skip adding static IP to network mk-old-k8s-version-635707 - found existing host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"}
	I0805 12:59:01.751130  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:59:01.753240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753634  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.753672  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753810  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:59:01.753854  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:59:01.753900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:01.753919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:59:01.753933  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:59:01.875919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:01.876298  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:59:01.877028  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:01.879644  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880120  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.880164  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880508  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:59:01.880778  451238 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:01.880805  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:01.881039  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.882998  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883362  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.883389  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883553  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.883755  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.883900  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.884012  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.884248  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.884496  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.884511  451238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:57.103049  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103095  450884 pod_ready.go:81] duration metric: took 93.113727ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.103109  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103116  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503531  450884 pod_ready.go:92] pod "kube-proxy-5dr9v" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:57.503556  450884 pod_ready.go:81] duration metric: took 400.433562ms for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503565  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:59.514591  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:02.011308  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.148902  450393 start.go:364] duration metric: took 56.514427046s to acquireMachinesLock for "embed-certs-321139"
	I0805 12:59:03.148967  450393 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:59:03.148976  450393 fix.go:54] fixHost starting: 
	I0805 12:59:03.149432  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:03.149473  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:03.166485  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0805 12:59:03.166934  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:03.167443  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:03.167469  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:03.167808  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:03.168062  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:03.168258  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:03.170011  450393 fix.go:112] recreateIfNeeded on embed-certs-321139: state=Stopped err=<nil>
	I0805 12:59:03.170036  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	W0805 12:59:03.170221  450393 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:59:03.172109  450393 out.go:177] * Restarting existing kvm2 VM for "embed-certs-321139" ...
	I0805 12:58:58.886766  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.886792  450576 pod_ready.go:81] duration metric: took 510.389529ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.886804  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891878  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.891907  450576 pod_ready.go:81] duration metric: took 5.094036ms for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891919  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896953  450576 pod_ready.go:92] pod "kube-proxy-cwq69" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.896981  450576 pod_ready.go:81] duration metric: took 5.054422ms for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896995  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902437  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.902456  450576 pod_ready.go:81] duration metric: took 5.453487ms for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902465  450576 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:00.909633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.410487  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.173728  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Start
	I0805 12:59:03.173932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring networks are active...
	I0805 12:59:03.174932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network default is active
	I0805 12:59:03.175441  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network mk-embed-certs-321139 is active
	I0805 12:59:03.176102  450393 main.go:141] libmachine: (embed-certs-321139) Getting domain xml...
	I0805 12:59:03.176848  450393 main.go:141] libmachine: (embed-certs-321139) Creating domain...
	I0805 12:59:01.984198  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:01.984237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984501  451238 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:59:01.984534  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984750  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.987690  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988085  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.988115  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988240  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.988470  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988782  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988945  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.989173  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.989407  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.989425  451238 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:59:02.108368  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:59:02.108406  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.111301  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111669  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.111712  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111837  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.112027  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112212  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112393  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.112563  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.112797  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.112824  451238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:02.225638  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
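The hostname step logged above boils down to setting the transient hostname, persisting it, and making sure /etc/hosts resolves it. A minimal standalone sketch of the same sequence (hostname taken from this run; purely illustrative, not the test harness itself):

    # set and persist the guest hostname, then make sure /etc/hosts resolves it
    NAME=old-k8s-version-635707
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    grep -q "$NAME" /etc/hosts || echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts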
	I0805 12:59:02.225681  451238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:02.225731  451238 buildroot.go:174] setting up certificates
	I0805 12:59:02.225745  451238 provision.go:84] configureAuth start
	I0805 12:59:02.225760  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:02.226099  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:02.229252  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229643  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.229671  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229885  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.232479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.232912  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.232951  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.233125  451238 provision.go:143] copyHostCerts
	I0805 12:59:02.233188  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:02.233201  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:02.233271  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:02.233412  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:02.233426  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:02.233459  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:02.233543  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:02.233553  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:02.233581  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:02.233661  451238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
	I0805 12:59:02.470213  451238 provision.go:177] copyRemoteCerts
	I0805 12:59:02.470328  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:02.470369  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.473450  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473791  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.473829  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473964  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.474173  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.474313  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.474429  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:02.558831  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:02.583652  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:59:02.609154  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:02.635827  451238 provision.go:87] duration metric: took 410.067115ms to configureAuth
	I0805 12:59:02.635862  451238 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:02.636109  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:59:02.636357  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.638964  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639466  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.639489  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639644  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.639953  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640197  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640454  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.640733  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.640975  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.641000  451238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:02.917466  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:02.917499  451238 machine.go:97] duration metric: took 1.036701572s to provisionDockerMachine
	I0805 12:59:02.917512  451238 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:59:02.917522  451238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:02.917539  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:02.917946  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:02.917979  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.920900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921383  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.921426  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.921773  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.921958  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.922220  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.003670  451238 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:03.008348  451238 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:03.008384  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:03.008468  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:03.008588  451238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:03.008727  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:03.019098  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:03.042969  451238 start.go:296] duration metric: took 125.441712ms for postStartSetup
	I0805 12:59:03.043011  451238 fix.go:56] duration metric: took 20.961935899s for fixHost
	I0805 12:59:03.043034  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.045667  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046030  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.046062  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046254  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.046508  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046701  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046824  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.047002  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:03.047182  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:03.047192  451238 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:59:03.148773  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862743.120260193
	
	I0805 12:59:03.148798  451238 fix.go:216] guest clock: 1722862743.120260193
	I0805 12:59:03.148807  451238 fix.go:229] Guest: 2024-08-05 12:59:03.120260193 +0000 UTC Remote: 2024-08-05 12:59:03.043015059 +0000 UTC m=+231.118249223 (delta=77.245134ms)
	I0805 12:59:03.148831  451238 fix.go:200] guest clock delta is within tolerance: 77.245134ms
	I0805 12:59:03.148836  451238 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 21.067801046s
	I0805 12:59:03.148857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.149131  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:03.152026  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152444  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.152475  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152645  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153423  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153495  451238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:03.153551  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.153860  451238 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:03.153895  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.156566  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156903  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.156963  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157187  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157411  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.157479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.157508  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157594  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.157770  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157782  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.157924  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.158107  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.158344  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.254162  451238 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:03.260684  451238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:03.409837  451238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:03.416010  451238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:03.416093  451238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:03.433548  451238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:03.433584  451238 start.go:495] detecting cgroup driver to use...
	I0805 12:59:03.433667  451238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:03.450756  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:03.467281  451238 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:03.467341  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:03.482537  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:03.498623  451238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:03.621224  451238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:03.781777  451238 docker.go:233] disabling docker service ...
	I0805 12:59:03.781842  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:03.798020  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:03.818262  451238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:03.940897  451238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:04.075622  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:04.092487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:04.112699  451238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:59:04.112769  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.124102  451238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:04.124181  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.136339  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.147689  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
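The CRI-O tweaks above are plain sed edits against /etc/crio/crio.conf.d/02-crio.conf. Gathered in one place for readability (same commands and values as logged; illustrative only):

    # point CRI-O at the pause image and the cgroupfs cgroup manager
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"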
	I0805 12:59:04.158552  451238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:04.171412  451238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:04.183284  451238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:04.183336  451238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:04.199465  451238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:04.215571  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:04.342540  451238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:04.521705  451238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:04.521786  451238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:04.526734  451238 start.go:563] Will wait 60s for crictl version
	I0805 12:59:04.526795  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:04.530528  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:04.572468  451238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
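The version probe above can be reproduced by hand with crictl pointed at the same socket; the explicit --runtime-endpoint flag below is only needed if /etc/crictl.yaml has not been written yet (a sketch, not part of the test output):

    # query the CRI runtime over the crio socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info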
	I0805 12:59:04.572557  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.602411  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.636641  451238 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:59:04.638062  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:04.641240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641734  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:04.641763  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641991  451238 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:04.646446  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:04.659876  451238 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:04.660037  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:59:04.660105  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:04.709636  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:04.709725  451238 ssh_runner.go:195] Run: which lz4
	I0805 12:59:04.714439  451238 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:59:04.719014  451238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:04.719047  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:59:06.414858  451238 crio.go:462] duration metric: took 1.70045694s to copy over tarball
	I0805 12:59:06.414950  451238 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:04.513198  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.018197  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:05.911274  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.911405  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:04.478626  450393 main.go:141] libmachine: (embed-certs-321139) Waiting to get IP...
	I0805 12:59:04.479615  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.480147  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.480209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.480103  452359 retry.go:31] will retry after 236.369287ms: waiting for machine to come up
	I0805 12:59:04.718716  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.719184  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.719209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.719125  452359 retry.go:31] will retry after 296.553947ms: waiting for machine to come up
	I0805 12:59:05.017667  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.018198  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.018235  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.018143  452359 retry.go:31] will retry after 427.78496ms: waiting for machine to come up
	I0805 12:59:05.447507  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.448075  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.448105  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.448038  452359 retry.go:31] will retry after 469.229133ms: waiting for machine to come up
	I0805 12:59:05.918469  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.919013  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.919047  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.918998  452359 retry.go:31] will retry after 720.005641ms: waiting for machine to come up
	I0805 12:59:06.641103  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:06.641679  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:06.641708  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:06.641634  452359 retry.go:31] will retry after 591.439327ms: waiting for machine to come up
	I0805 12:59:07.234573  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:07.235179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:07.235207  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:07.235063  452359 retry.go:31] will retry after 1.087958168s: waiting for machine to come up
	I0805 12:59:08.324599  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:08.325179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:08.325212  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:08.325129  452359 retry.go:31] will retry after 1.316276197s: waiting for machine to come up
	I0805 12:59:09.473711  451238 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058718584s)
	I0805 12:59:09.473740  451238 crio.go:469] duration metric: took 3.058854233s to extract the tarball
	I0805 12:59:09.473748  451238 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:09.524420  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:09.562003  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:09.562035  451238 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:59:09.562107  451238 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.562159  451238 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.562156  451238 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.562194  451238 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.562228  451238 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.562256  451238 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.562374  451238 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:59:09.562274  451238 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.563981  451238 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.563993  451238 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.564007  451238 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.564015  451238 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.564032  451238 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.564041  451238 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.564076  451238 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.564075  451238 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:59:09.727888  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.732060  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.732150  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.736408  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.748051  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.753579  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.762561  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:59:09.822623  451238 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:59:09.822681  451238 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.822742  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.824314  451238 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:59:09.824360  451238 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.824404  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905619  451238 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:59:09.905778  451238 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.905738  451238 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:59:09.905944  451238 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.905998  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905851  451238 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:59:09.906075  451238 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.906133  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905861  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916767  451238 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:59:09.916796  451238 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:59:09.916812  451238 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.916830  451238 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:59:09.916864  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916868  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916905  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.916958  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.918683  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.918718  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.918776  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:10.007687  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:59:10.007721  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:59:10.007871  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:10.042432  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:59:10.061343  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:59:10.061400  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:59:10.061469  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:59:10.073852  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:59:10.084957  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:59:10.423355  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:10.563992  451238 cache_images.go:92] duration metric: took 1.001937985s to LoadCachedImages
	W0805 12:59:10.564184  451238 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0805 12:59:10.564211  451238 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:59:10.564345  451238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:10.564427  451238 ssh_runner.go:195] Run: crio config
	I0805 12:59:10.612146  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:59:10.612180  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:10.612197  451238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:10.612226  451238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:59:10.612415  451238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:10.612507  451238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:59:10.623036  451238 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:10.623121  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:10.633484  451238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:59:10.652444  451238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:10.673192  451238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
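The kubeadm.yaml.new just copied is the config dumped a few lines above. If one wanted to sanity-check such a file by hand before the cluster is started, a dry run against it is one option; the kubeadm binary path below is assumed from the kubelet ExecStart shown earlier, and this command is a sketch, not something the test performs:

    # exercise the generated kubeadm config without modifying the node
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run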
	I0805 12:59:10.694533  451238 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:10.699901  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:10.714251  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:10.838992  451238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:10.857248  451238 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:59:10.857279  451238 certs.go:194] generating shared ca certs ...
	I0805 12:59:10.857303  451238 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:10.857515  451238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:10.857587  451238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:10.857602  451238 certs.go:256] generating profile certs ...
	I0805 12:59:10.857746  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:59:10.857847  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:59:10.857907  451238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:59:10.858072  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:10.858122  451238 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:10.858143  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:10.858177  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:10.858207  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:10.858235  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:10.858294  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:10.859247  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:10.908518  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:10.949310  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:10.981447  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:11.008085  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:59:11.035539  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:59:11.071371  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:11.099842  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:59:11.135629  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:11.164194  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:11.190595  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:11.219765  451238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:11.240836  451238 ssh_runner.go:195] Run: openssl version
	I0805 12:59:11.247516  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:11.260736  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266004  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266100  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.273012  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:11.285453  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:11.296934  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301588  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301655  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.307459  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:11.318833  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:11.330224  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334864  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334917  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.341338  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
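	The three blocks above (3912192.pem, minikubeCA.pem, 391219.pem) all apply the same pattern for registering a CA with the guest's trust store; a minimal standalone sketch of that pattern, using the minikubeCA paths and hash value shown in the log:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    sudo test -s "$CERT" && sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem      # expose the cert under /etc/ssl/certs
	    HASH=$(openssl x509 -hash -noout -in "$CERT")                                  # OpenSSL subject hash, b5213941 for this CA
	    sudo test -L "/etc/ssl/certs/${HASH}.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # <hash>.0 is the name OpenSSL resolves at verify time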
	I0805 12:59:11.353084  451238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:11.358532  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:11.365419  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:11.371581  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:11.378308  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:11.384640  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:11.390622  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
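	The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certs need regenerating. The same checks above, expressed as one loop over the certs listed in the log:

	    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	      openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	        || echo "certificate ${c}.crt expires within 24h"
	    done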
	I0805 12:59:11.397027  451238 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:11.397199  451238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:11.397286  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.436612  451238 cri.go:89] found id: ""
	I0805 12:59:11.436689  451238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:11.447906  451238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:11.447927  451238 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:11.447984  451238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:11.459282  451238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:11.460548  451238 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:11.461355  451238 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-635707" cluster setting kubeconfig missing "old-k8s-version-635707" context setting]
	I0805 12:59:11.462324  451238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:11.476306  451238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:11.487869  451238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.41
	I0805 12:59:11.487911  451238 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:11.487927  451238 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:11.487988  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.526601  451238 cri.go:89] found id: ""
	I0805 12:59:11.526674  451238 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:11.545429  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:11.556725  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:11.556755  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:11.556820  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:11.566564  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:11.566648  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:11.576859  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:11.586237  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:11.586329  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:11.596721  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.607239  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:11.607340  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.617626  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:11.627179  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:11.627251  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
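	The four grep/rm pairs above apply one rule per file: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A compact sketch of that rule:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done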
	I0805 12:59:11.637566  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:11.648889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:11.780270  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:08.018320  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:08.018363  450884 pod_ready.go:81] duration metric: took 10.514788401s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:08.018379  450884 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:10.270876  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:10.409419  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:12.410565  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:09.643077  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:09.643655  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:09.643692  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:09.643554  452359 retry.go:31] will retry after 1.473183692s: waiting for machine to come up
	I0805 12:59:11.118468  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:11.119005  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:11.119035  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:11.118943  452359 retry.go:31] will retry after 2.036333626s: waiting for machine to come up
	I0805 12:59:13.156866  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:13.157390  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:13.157419  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:13.157339  452359 retry.go:31] will retry after 2.095065362s: waiting for machine to come up
	I0805 12:59:12.549918  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.781853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.877381  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
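	The five kubeadm invocations above (interleaved with log lines from the other profiles under test) rebuild the control plane phase by phase instead of running a full kubeadm init; collected in one place, with the version-pinned PATH and config path taken from the log:

	    CFG=/var/tmp/minikube/kubeadm.yaml
	    KPATH=/var/lib/minikube/binaries/v1.20.0:$PATH
	    sudo env PATH="$KPATH" kubeadm init phase certs all          --config "$CFG"
	    sudo env PATH="$KPATH" kubeadm init phase kubeconfig all     --config "$CFG"
	    sudo env PATH="$KPATH" kubeadm init phase kubelet-start      --config "$CFG"
	    sudo env PATH="$KPATH" kubeadm init phase control-plane all  --config "$CFG"
	    sudo env PATH="$KPATH" kubeadm init phase etcd local         --config "$CFG"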
	I0805 12:59:12.978141  451238 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:12.978250  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.479242  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.978456  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.478575  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.978783  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.479342  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.978307  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.479180  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
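	The repeated pgrep lines here (and continuing further down for the same process, 451238) are a poll at roughly half-second intervals, waiting for the apiserver process started by the phases above to appear; roughly equivalent to:

	    # wait until a kube-apiserver process launched for this cluster exists
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 0.5
	    done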
	I0805 12:59:12.526543  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.027362  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:14.909480  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:16.911090  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.253589  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:15.254081  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:15.254111  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:15.254020  452359 retry.go:31] will retry after 2.859783781s: waiting for machine to come up
	I0805 12:59:18.116972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:18.117528  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:18.117559  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:18.117486  452359 retry.go:31] will retry after 4.456427854s: waiting for machine to come up
	I0805 12:59:16.978915  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.978574  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.478343  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.978820  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.478488  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.978335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.478945  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.979040  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.479324  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.525332  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.525407  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.025092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.410416  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:21.908646  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.576842  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577261  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has current primary IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577291  450393 main.go:141] libmachine: (embed-certs-321139) Found IP for machine: 192.168.39.196
	I0805 12:59:22.577306  450393 main.go:141] libmachine: (embed-certs-321139) Reserving static IP address...
	I0805 12:59:22.577834  450393 main.go:141] libmachine: (embed-certs-321139) Reserved static IP address: 192.168.39.196
	I0805 12:59:22.577877  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.577893  450393 main.go:141] libmachine: (embed-certs-321139) Waiting for SSH to be available...
	I0805 12:59:22.577915  450393 main.go:141] libmachine: (embed-certs-321139) DBG | skip adding static IP to network mk-embed-certs-321139 - found existing host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"}
	I0805 12:59:22.577922  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Getting to WaitForSSH function...
	I0805 12:59:22.580080  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580520  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.580552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580707  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH client type: external
	I0805 12:59:22.580742  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa (-rw-------)
	I0805 12:59:22.580764  450393 main.go:141] libmachine: (embed-certs-321139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:22.580778  450393 main.go:141] libmachine: (embed-certs-321139) DBG | About to run SSH command:
	I0805 12:59:22.580793  450393 main.go:141] libmachine: (embed-certs-321139) DBG | exit 0
	I0805 12:59:22.703872  450393 main.go:141] libmachine: (embed-certs-321139) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:22.704333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetConfigRaw
	I0805 12:59:22.705046  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:22.707544  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.707919  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.707951  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.708240  450393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/config.json ...
	I0805 12:59:22.708474  450393 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:22.708501  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:22.708755  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.711177  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711488  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.711510  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711639  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.711842  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.711998  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.712157  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.712378  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.712581  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.712595  450393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:59:22.816371  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:22.816433  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816708  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:59:22.816743  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816959  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.819715  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820085  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.820108  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820321  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.820510  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820656  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820794  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.820952  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.821203  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.821229  450393 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-321139 && echo "embed-certs-321139" | sudo tee /etc/hostname
	I0805 12:59:22.938845  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-321139
	
	I0805 12:59:22.938888  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.942264  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942651  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.942684  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942904  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.943161  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943383  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943568  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.943777  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.943987  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.944011  450393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-321139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-321139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-321139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:23.062700  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:23.062734  450393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:23.062762  450393 buildroot.go:174] setting up certificates
	I0805 12:59:23.062774  450393 provision.go:84] configureAuth start
	I0805 12:59:23.062800  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:23.063142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.065839  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066140  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.066175  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066359  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.069214  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069562  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.069597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069746  450393 provision.go:143] copyHostCerts
	I0805 12:59:23.069813  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:23.069827  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:23.069897  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:23.070014  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:23.070025  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:23.070083  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:23.070185  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:23.070197  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:23.070226  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:23.070308  450393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.embed-certs-321139 san=[127.0.0.1 192.168.39.196 embed-certs-321139 localhost minikube]
	I0805 12:59:23.223660  450393 provision.go:177] copyRemoteCerts
	I0805 12:59:23.223759  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:23.223799  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.226548  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.226980  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.227014  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.227195  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.227449  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.227624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.227801  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.311952  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:59:23.336888  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:23.363397  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:23.388197  450393 provision.go:87] duration metric: took 325.408192ms to configureAuth
	I0805 12:59:23.388234  450393 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:23.388470  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:23.388596  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.391247  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.391626  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391843  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.392054  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392240  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392371  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.392528  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.392825  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.392853  450393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:23.675427  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:23.675459  450393 machine.go:97] duration metric: took 966.969142ms to provisionDockerMachine
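	The %!s(MISSING) token in the /etc/sysconfig/crio.minikube command just above (and the similar %!s(MISSING), %!N(MISSING) and %!p(MISSING) tokens in the date and find commands elsewhere in this log) is a Go fmt artifact produced when an already-formatted command string is passed back through a printf-style logger; it is not part of what ran on the VM. Judging by the successful output echoed back, the command as executed most likely reads:

	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio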
	I0805 12:59:23.675472  450393 start.go:293] postStartSetup for "embed-certs-321139" (driver="kvm2")
	I0805 12:59:23.675484  450393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:23.675515  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.675885  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:23.675912  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.678780  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679100  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.679152  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.679524  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.679657  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.679860  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.764372  450393 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:23.769059  450393 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:23.769088  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:23.769162  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:23.769231  450393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:23.769334  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:23.781287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:23.808609  450393 start.go:296] duration metric: took 133.117086ms for postStartSetup
	I0805 12:59:23.808665  450393 fix.go:56] duration metric: took 20.659690035s for fixHost
	I0805 12:59:23.808694  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.811519  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.811948  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.811978  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.812164  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.812366  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812539  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812708  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.812897  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.813137  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.813151  450393 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:59:23.916498  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862763.883942670
	
	I0805 12:59:23.916521  450393 fix.go:216] guest clock: 1722862763.883942670
	I0805 12:59:23.916536  450393 fix.go:229] Guest: 2024-08-05 12:59:23.88394267 +0000 UTC Remote: 2024-08-05 12:59:23.8086712 +0000 UTC m=+359.764794687 (delta=75.27147ms)
	I0805 12:59:23.916570  450393 fix.go:200] guest clock delta is within tolerance: 75.27147ms
	I0805 12:59:23.916578  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 20.767637373s
	I0805 12:59:23.916598  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.916867  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.919570  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.919972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.919999  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.920142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920666  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920837  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920930  450393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:23.920981  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.921063  450393 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:23.921083  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.924176  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924557  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924588  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924613  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924635  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924749  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.924936  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.925021  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925127  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925286  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925369  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.925454  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:24.000693  450393 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:24.023194  450393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:24.178807  450393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:24.184954  450393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:24.185031  450393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:24.201420  450393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:24.201453  450393 start.go:495] detecting cgroup driver to use...
	I0805 12:59:24.201543  450393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:24.218603  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:24.233928  450393 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:24.233999  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:24.248455  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:24.263355  450393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:24.386806  450393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:24.565128  450393 docker.go:233] disabling docker service ...
	I0805 12:59:24.565229  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:24.581053  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:24.594297  450393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:24.716615  450393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:24.835687  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:24.850666  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:24.870993  450393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:59:24.871055  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.881731  450393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:24.881815  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.893156  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.903802  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.915189  450393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:24.926967  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.938008  450393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.956033  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.967863  450393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:24.977758  450393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:24.977822  450393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:24.993837  450393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:25.005009  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:25.135856  450393 ssh_runner.go:195] Run: sudo systemctl restart crio
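	The sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) rewrite /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart; a sketch of the expected end state, checked with grep rather than read back from the VM, with the expected values shown as comments:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",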
	I0805 12:59:25.277425  450393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:25.277513  450393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:25.282628  450393 start.go:563] Will wait 60s for crictl version
	I0805 12:59:25.282704  450393 ssh_runner.go:195] Run: which crictl
	I0805 12:59:25.287324  450393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:25.335315  450393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:25.335396  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.367574  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.398926  450393 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:59:21.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.478367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.978424  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.478877  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.978841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.478635  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.978824  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.479076  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.979222  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.478928  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.025234  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:26.028817  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:23.909428  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.910877  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:27.911235  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.400219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:25.403052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403508  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:25.403552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403849  450393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:25.408402  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:25.423146  450393 kubeadm.go:883] updating cluster {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:25.423301  450393 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:59:25.423368  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:25.460713  450393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:59:25.460795  450393 ssh_runner.go:195] Run: which lz4
	I0805 12:59:25.464997  450393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:59:25.469397  450393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:25.469452  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:59:26.966110  450393 crio.go:462] duration metric: took 1.501152522s to copy over tarball
	I0805 12:59:26.966207  450393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:26.978648  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.478951  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.978405  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.479008  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.978521  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.479199  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.979288  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.479030  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.978372  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.479194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.525888  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:31.025690  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:30.410973  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:32.910889  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:29.287605  450393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321364872s)
	I0805 12:59:29.287636  450393 crio.go:469] duration metric: took 2.321487153s to extract the tarball
	I0805 12:59:29.287647  450393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:29.329182  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:29.372183  450393 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:59:29.372211  450393 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:59:29.372220  450393 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0805 12:59:29.372349  450393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-321139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:29.372433  450393 ssh_runner.go:195] Run: crio config
	I0805 12:59:29.426003  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:29.426025  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:29.426036  450393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:29.426059  450393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-321139 NodeName:embed-certs-321139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:59:29.426192  450393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-321139"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:29.426250  450393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:59:29.436248  450393 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:29.436315  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:29.445844  450393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0805 12:59:29.463125  450393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:29.479685  450393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0805 12:59:29.499033  450393 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:29.503175  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:29.516141  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:29.645914  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:29.664578  450393 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139 for IP: 192.168.39.196
	I0805 12:59:29.664608  450393 certs.go:194] generating shared ca certs ...
	I0805 12:59:29.664626  450393 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:29.664853  450393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:29.664922  450393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:29.664939  450393 certs.go:256] generating profile certs ...
	I0805 12:59:29.665058  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/client.key
	I0805 12:59:29.665143  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key.ce53eda3
	I0805 12:59:29.665183  450393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key
	I0805 12:59:29.665293  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:29.665324  450393 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:29.665331  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:29.665360  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:29.665382  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:29.665405  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:29.665442  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:29.666287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:29.705969  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:29.752700  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:29.779819  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:29.806578  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0805 12:59:29.832277  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:59:29.861682  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:29.888113  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:59:29.915023  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:29.942582  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:29.971225  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:29.999278  450393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:30.018294  450393 ssh_runner.go:195] Run: openssl version
	I0805 12:59:30.024645  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:30.035446  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040216  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040279  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.046151  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:30.057664  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:30.068822  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074073  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074138  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.080126  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:30.091168  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:30.103171  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108840  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108924  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.115469  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:30.126742  450393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:30.132008  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:30.138285  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:30.144251  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:30.150718  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:30.157183  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:30.163709  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:59:30.170852  450393 kubeadm.go:392] StartCluster: {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:30.170987  450393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:30.171055  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.216014  450393 cri.go:89] found id: ""
	I0805 12:59:30.216103  450393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:30.234046  450393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:30.234076  450393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:30.234151  450393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:30.245861  450393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:30.247434  450393 kubeconfig.go:125] found "embed-certs-321139" server: "https://192.168.39.196:8443"
	I0805 12:59:30.250024  450393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:30.261066  450393 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.196
	I0805 12:59:30.261116  450393 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:30.261140  450393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:30.261201  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.306587  450393 cri.go:89] found id: ""
	I0805 12:59:30.306678  450393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:30.326818  450393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:30.336908  450393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:30.336931  450393 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:30.336984  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:30.346004  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:30.346105  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:30.355979  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:30.366124  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:30.366185  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:30.376923  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.386526  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:30.386599  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.396661  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:30.406693  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:30.406765  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:30.417789  450393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:30.428214  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:30.554777  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.703579  450393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.14876196s)
	I0805 12:59:31.703620  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.925724  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.999840  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:32.089948  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:32.090084  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.590152  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.090222  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.115351  450393 api_server.go:72] duration metric: took 1.025404322s to wait for apiserver process to appear ...
	I0805 12:59:33.115385  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:59:33.115411  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:33.115983  450393 api_server.go:269] stopped: https://192.168.39.196:8443/healthz: Get "https://192.168.39.196:8443/healthz": dial tcp 192.168.39.196:8443: connect: connection refused
	I0805 12:59:33.616210  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:31.978481  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.479031  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.978796  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.478677  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.979377  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.478595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.979227  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.478695  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.978911  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.479327  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.027363  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:35.525528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:36.274855  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.274895  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.274912  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.314290  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.314325  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.615566  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.620594  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:36.620626  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.116251  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.120719  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:37.120749  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.616330  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.620778  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 12:59:37.627608  450393 api_server.go:141] control plane version: v1.30.3
	I0805 12:59:37.627640  450393 api_server.go:131] duration metric: took 4.512246076s to wait for apiserver health ...
	I0805 12:59:37.627652  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:37.627661  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:37.628987  450393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:59:35.410070  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.411719  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.630068  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:59:37.650034  450393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:59:37.691891  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:59:37.704810  450393 system_pods.go:59] 8 kube-system pods found
	I0805 12:59:37.704855  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:59:37.704866  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:59:37.704887  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:59:37.704900  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:59:37.704916  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 12:59:37.704928  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:59:37.704946  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:59:37.704956  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:59:37.704967  450393 system_pods.go:74] duration metric: took 13.04358ms to wait for pod list to return data ...
	I0805 12:59:37.704980  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:59:37.710340  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:59:37.710367  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 12:59:37.710382  450393 node_conditions.go:105] duration metric: took 5.392102ms to run NodePressure ...
	I0805 12:59:37.710402  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:37.995945  450393 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000274  450393 kubeadm.go:739] kubelet initialised
	I0805 12:59:38.000295  450393 kubeadm.go:740] duration metric: took 4.323835ms waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000302  450393 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:38.006122  450393 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.012368  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012392  450393 pod_ready.go:81] duration metric: took 6.243837ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.012400  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012406  450393 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.016338  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016357  450393 pod_ready.go:81] duration metric: took 3.943012ms for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.016364  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016369  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.021019  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021044  450393 pod_ready.go:81] duration metric: took 4.667242ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.021055  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021063  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.096303  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096334  450393 pod_ready.go:81] duration metric: took 75.253785ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.096345  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096351  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.495648  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495677  450393 pod_ready.go:81] duration metric: took 399.318117ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.495687  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495694  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.896066  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896091  450393 pod_ready.go:81] duration metric: took 400.39101ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.896101  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896108  450393 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:39.295587  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295618  450393 pod_ready.go:81] duration metric: took 399.499354ms for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:39.295632  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295653  450393 pod_ready.go:38] duration metric: took 1.295340252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:39.295675  450393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:59:39.308136  450393 ops.go:34] apiserver oom_adj: -16
	I0805 12:59:39.308161  450393 kubeadm.go:597] duration metric: took 9.07407738s to restartPrimaryControlPlane
	I0805 12:59:39.308170  450393 kubeadm.go:394] duration metric: took 9.137335392s to StartCluster
	I0805 12:59:39.308188  450393 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.308272  450393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:39.310750  450393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.311015  450393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:59:39.311149  450393 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:59:39.311240  450393 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-321139"
	I0805 12:59:39.311289  450393 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-321139"
	W0805 12:59:39.311303  450393 addons.go:243] addon storage-provisioner should already be in state true
	I0805 12:59:39.311301  450393 addons.go:69] Setting metrics-server=true in profile "embed-certs-321139"
	I0805 12:59:39.311305  450393 addons.go:69] Setting default-storageclass=true in profile "embed-certs-321139"
	I0805 12:59:39.311351  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311360  450393 addons.go:234] Setting addon metrics-server=true in "embed-certs-321139"
	W0805 12:59:39.311371  450393 addons.go:243] addon metrics-server should already be in state true
	I0805 12:59:39.311371  450393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-321139"
	I0805 12:59:39.311454  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311287  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:39.311848  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311897  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311906  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.311912  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311964  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.312115  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.313050  450393 out.go:177] * Verifying Kubernetes components...
	I0805 12:59:39.314390  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:39.327427  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0805 12:59:39.327687  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0805 12:59:39.328016  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328155  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328609  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328649  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.328735  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328786  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.329013  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329086  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329560  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329599  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.329676  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329721  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.330884  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0805 12:59:39.331381  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.331878  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.331902  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.332289  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.332529  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.336244  450393 addons.go:234] Setting addon default-storageclass=true in "embed-certs-321139"
	W0805 12:59:39.336269  450393 addons.go:243] addon default-storageclass should already be in state true
	I0805 12:59:39.336305  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.336688  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.336735  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.347255  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0805 12:59:39.347411  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0805 12:59:39.347776  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.347910  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.348271  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348291  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348464  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348476  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348603  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348760  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.348817  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348955  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.350697  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.350906  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.352896  450393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:39.352895  450393 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 12:59:39.354185  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 12:59:39.354207  450393 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 12:59:39.354224  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.354266  450393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.354277  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:59:39.354292  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.356641  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0805 12:59:39.357213  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.357546  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.357791  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.357814  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.357867  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.358001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.358020  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359294  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359322  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.359337  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359345  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.359353  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359488  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359669  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.359783  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.359977  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.360009  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.360077  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.360210  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.380935  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0805 12:59:39.381394  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.381987  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.382029  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.382362  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.382603  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.384225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.384497  450393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.384515  450393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:59:39.384536  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.389471  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.389972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.390001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.390124  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.390303  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.390604  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.390791  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.513696  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:39.533291  450393 node_ready.go:35] waiting up to 6m0s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:39.597816  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.700234  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.719936  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 12:59:39.719958  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 12:59:39.760405  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 12:59:39.760441  450393 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 12:59:39.808765  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.808794  450393 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 12:59:39.833073  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.946594  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.946633  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.946968  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.946995  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.947121  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.947137  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.947456  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.947477  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947490  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.953919  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.953942  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.954189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.954209  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636249  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636274  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636638  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:40.636715  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.636729  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636745  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636757  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636989  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.637008  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.671789  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.671819  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672207  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672217  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.672225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672468  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672485  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672499  450393 addons.go:475] Verifying addon metrics-server=true in "embed-certs-321139"
	I0805 12:59:40.674497  450393 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 12:59:36.978361  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.478380  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.478283  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.979257  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.478407  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.978772  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.478395  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.979309  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.478302  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.026001  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.026706  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:39.909336  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:41.910240  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.675778  450393 addons.go:510] duration metric: took 1.364642066s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 12:59:41.537321  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:44.037571  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:41.978791  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.478841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.478344  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.978613  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.478756  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.478363  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.478417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.524568  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:45.024950  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:47.025453  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:44.408846  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.410085  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.537183  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:47.037178  450393 node_ready.go:49] node "embed-certs-321139" has status "Ready":"True"
	I0805 12:59:47.037206  450393 node_ready.go:38] duration metric: took 7.503884334s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:47.037221  450393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:47.043159  450393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048037  450393 pod_ready.go:92] pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:47.048088  450393 pod_ready.go:81] duration metric: took 4.901694ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048102  450393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055429  450393 pod_ready.go:92] pod "etcd-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.055454  450393 pod_ready.go:81] duration metric: took 2.007345086s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055464  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060072  450393 pod_ready.go:92] pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.060095  450393 pod_ready.go:81] duration metric: took 4.624968ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060103  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065663  450393 pod_ready.go:92] pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.065689  450393 pod_ready.go:81] duration metric: took 5.578205ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065708  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071143  450393 pod_ready.go:92] pod "kube-proxy-shgv2" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.071166  450393 pod_ready.go:81] duration metric: took 5.450104ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071174  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:46.978356  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.478322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.978417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.478966  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.979317  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.478449  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.978364  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.479294  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.978435  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.478614  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.028075  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.524299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:48.908177  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:50.908490  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:52.909257  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:49.438002  450393 pod_ready.go:92] pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.438032  450393 pod_ready.go:81] duration metric: took 366.851004ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.438042  450393 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:51.443490  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:53.444534  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.978526  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.479187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.979090  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.478733  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.978571  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.478525  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.979125  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.478711  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.979266  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.478956  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.525369  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.526660  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:54.909757  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.409489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.445189  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.944983  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:56.979226  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.978634  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.478338  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.978987  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.479290  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.978383  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.478373  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.978412  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.479312  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.527240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.024177  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.024749  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:59.908362  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.909101  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.445471  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.944535  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.479119  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.978313  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.478401  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.979029  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.478963  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.978393  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.478418  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.978381  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.479229  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.028522  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.525385  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:04.409119  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.409863  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:05.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:07.452452  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.979172  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.479251  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.979183  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.478722  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.979248  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.478527  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.978581  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.478499  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.978520  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.478843  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.025651  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.525086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:08.909528  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.408408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:13.410472  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:09.945614  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:12.443723  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.978536  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.478504  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.979179  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:12.979258  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:13.022653  451238 cri.go:89] found id: ""
	I0805 13:00:13.022680  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.022689  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:13.022696  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:13.022766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:13.059292  451238 cri.go:89] found id: ""
	I0805 13:00:13.059326  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.059336  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:13.059343  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:13.059399  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:13.098750  451238 cri.go:89] found id: ""
	I0805 13:00:13.098782  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.098793  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:13.098802  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:13.098866  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:13.133307  451238 cri.go:89] found id: ""
	I0805 13:00:13.133338  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.133346  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:13.133353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:13.133420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:13.171124  451238 cri.go:89] found id: ""
	I0805 13:00:13.171160  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.171170  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:13.171177  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:13.171237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:13.209200  451238 cri.go:89] found id: ""
	I0805 13:00:13.209235  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.209247  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:13.209254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:13.209312  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:13.244261  451238 cri.go:89] found id: ""
	I0805 13:00:13.244302  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.244313  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:13.244324  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:13.244397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:13.283295  451238 cri.go:89] found id: ""
	I0805 13:00:13.283331  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.283342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:13.283356  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:13.283372  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:13.344134  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:13.344174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:13.384084  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:13.384119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:13.433784  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:13.433821  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:13.449756  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:13.449786  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:13.573090  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:16.074053  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:16.087817  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:16.087900  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:16.130938  451238 cri.go:89] found id: ""
	I0805 13:00:16.130970  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.130981  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:16.130989  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:16.131058  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:16.184208  451238 cri.go:89] found id: ""
	I0805 13:00:16.184245  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.184259  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:16.184269  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:16.184346  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:16.230959  451238 cri.go:89] found id: ""
	I0805 13:00:16.230998  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.231011  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:16.231020  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:16.231100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:16.282886  451238 cri.go:89] found id: ""
	I0805 13:00:16.282940  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.282954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:16.282963  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:16.283024  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:16.320345  451238 cri.go:89] found id: ""
	I0805 13:00:16.320381  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.320397  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:16.320404  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:16.320521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:16.356390  451238 cri.go:89] found id: ""
	I0805 13:00:16.356427  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.356439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:16.356447  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:16.356503  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:16.400477  451238 cri.go:89] found id: ""
	I0805 13:00:16.400510  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.400529  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:16.400539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:16.400612  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:16.440634  451238 cri.go:89] found id: ""
	I0805 13:00:16.440662  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.440673  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:16.440685  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:16.440702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:16.510879  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:16.510922  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:16.554294  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:16.554332  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:16.607798  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:16.607853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:16.622618  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:16.622655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:16.702599  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:14.025025  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.025182  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:15.909245  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.409729  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:14.445222  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.445451  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.944533  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:19.202789  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:19.215776  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:19.215851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:19.250503  451238 cri.go:89] found id: ""
	I0805 13:00:19.250540  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.250551  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:19.250558  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:19.250630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:19.287358  451238 cri.go:89] found id: ""
	I0805 13:00:19.287392  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.287403  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:19.287412  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:19.287484  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:19.322167  451238 cri.go:89] found id: ""
	I0805 13:00:19.322195  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.322203  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:19.322209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:19.322262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:19.356874  451238 cri.go:89] found id: ""
	I0805 13:00:19.356905  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.356923  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:19.356931  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:19.357006  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:19.395172  451238 cri.go:89] found id: ""
	I0805 13:00:19.395206  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.395217  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:19.395227  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:19.395294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:19.438404  451238 cri.go:89] found id: ""
	I0805 13:00:19.438431  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.438439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:19.438445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:19.438510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:19.474727  451238 cri.go:89] found id: ""
	I0805 13:00:19.474755  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.474762  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:19.474769  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:19.474832  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:19.513906  451238 cri.go:89] found id: ""
	I0805 13:00:19.513945  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.513953  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:19.513963  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:19.513977  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:19.528337  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:19.528378  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:19.601135  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.601168  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:19.601185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:19.676792  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:19.676844  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:19.716861  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:19.716894  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:18.025634  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.027525  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.909150  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.910153  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.945009  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:23.444529  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.266971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:22.280346  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:22.280422  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:22.314788  451238 cri.go:89] found id: ""
	I0805 13:00:22.314816  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.314824  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:22.314831  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:22.314884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:22.357357  451238 cri.go:89] found id: ""
	I0805 13:00:22.357394  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.357405  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:22.357414  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:22.357483  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:22.393254  451238 cri.go:89] found id: ""
	I0805 13:00:22.393288  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.393296  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:22.393302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:22.393366  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:22.434766  451238 cri.go:89] found id: ""
	I0805 13:00:22.434796  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.434807  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:22.434815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:22.434887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:22.475649  451238 cri.go:89] found id: ""
	I0805 13:00:22.475676  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.475684  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:22.475690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:22.475754  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:22.515633  451238 cri.go:89] found id: ""
	I0805 13:00:22.515662  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.515670  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:22.515677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:22.515757  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:22.550716  451238 cri.go:89] found id: ""
	I0805 13:00:22.550749  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.550759  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:22.550767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:22.550849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:22.588537  451238 cri.go:89] found id: ""
	I0805 13:00:22.588571  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.588583  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:22.588595  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:22.588609  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.638535  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:22.638577  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:22.654879  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:22.654919  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:22.721482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:22.721513  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:22.721529  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:22.801442  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:22.801489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.343805  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:25.358068  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:25.358176  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:25.393734  451238 cri.go:89] found id: ""
	I0805 13:00:25.393767  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.393778  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:25.393785  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:25.393849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:25.428217  451238 cri.go:89] found id: ""
	I0805 13:00:25.428244  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.428252  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:25.428257  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:25.428316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:25.462826  451238 cri.go:89] found id: ""
	I0805 13:00:25.462858  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.462869  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:25.462877  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:25.462961  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:25.502960  451238 cri.go:89] found id: ""
	I0805 13:00:25.502989  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.502998  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:25.503006  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:25.503072  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:25.538859  451238 cri.go:89] found id: ""
	I0805 13:00:25.538888  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.538897  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:25.538902  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:25.538964  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:25.577850  451238 cri.go:89] found id: ""
	I0805 13:00:25.577883  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.577894  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:25.577901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:25.577988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:25.611728  451238 cri.go:89] found id: ""
	I0805 13:00:25.611773  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.611785  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:25.611793  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:25.611865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:25.654987  451238 cri.go:89] found id: ""
	I0805 13:00:25.655018  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.655027  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:25.655039  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:25.655052  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:25.669124  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:25.669160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:25.747354  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:25.747380  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:25.747398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:25.825198  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:25.825241  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.865511  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:25.865546  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.526638  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.024414  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.025393  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.409361  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.411148  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.444607  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.447460  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:28.418263  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:28.431831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:28.431895  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:28.470249  451238 cri.go:89] found id: ""
	I0805 13:00:28.470280  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.470291  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:28.470301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:28.470373  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:28.506935  451238 cri.go:89] found id: ""
	I0805 13:00:28.506968  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.506977  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:28.506985  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:28.507053  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:28.546621  451238 cri.go:89] found id: ""
	I0805 13:00:28.546652  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.546663  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:28.546671  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:28.546749  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:28.584699  451238 cri.go:89] found id: ""
	I0805 13:00:28.584734  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.584745  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:28.584753  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:28.584820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:28.620693  451238 cri.go:89] found id: ""
	I0805 13:00:28.620726  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.620736  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:28.620744  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:28.620814  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:28.657340  451238 cri.go:89] found id: ""
	I0805 13:00:28.657370  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.657379  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:28.657385  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:28.657438  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:28.695126  451238 cri.go:89] found id: ""
	I0805 13:00:28.695156  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.695166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:28.695174  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:28.695239  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:28.729757  451238 cri.go:89] found id: ""
	I0805 13:00:28.729808  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.729821  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:28.729834  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:28.729852  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:28.769642  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:28.769675  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:28.818076  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:28.818114  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:28.831466  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:28.831496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:28.902788  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:28.902818  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:28.902836  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.482482  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:31.497767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:31.497867  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:31.536922  451238 cri.go:89] found id: ""
	I0805 13:00:31.536948  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.536960  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:31.536969  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:31.537040  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:31.572422  451238 cri.go:89] found id: ""
	I0805 13:00:31.572456  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.572466  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:31.572472  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:31.572531  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:31.607961  451238 cri.go:89] found id: ""
	I0805 13:00:31.607996  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.608008  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:31.608016  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:31.608082  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:31.641771  451238 cri.go:89] found id: ""
	I0805 13:00:31.641800  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.641822  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:31.641830  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:31.641904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:31.681661  451238 cri.go:89] found id: ""
	I0805 13:00:31.681695  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.681707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:31.681715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:31.681791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:31.723777  451238 cri.go:89] found id: ""
	I0805 13:00:31.723814  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.723823  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:31.723829  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:31.723922  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:31.759898  451238 cri.go:89] found id: ""
	I0805 13:00:31.759935  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.759948  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:31.759957  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:31.760022  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:31.798433  451238 cri.go:89] found id: ""
	I0805 13:00:31.798462  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.798470  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:31.798480  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:31.798497  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:31.872005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:31.872030  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:31.872045  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.952201  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:31.952240  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:29.524445  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.525646  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.909901  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:32.408826  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.944170  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.944427  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.995920  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:31.995955  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:32.047453  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:32.047493  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.562369  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:34.576644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:34.576708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:34.613002  451238 cri.go:89] found id: ""
	I0805 13:00:34.613036  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.613047  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:34.613056  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:34.613127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:34.650723  451238 cri.go:89] found id: ""
	I0805 13:00:34.650757  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.650769  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:34.650777  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:34.650851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:34.689047  451238 cri.go:89] found id: ""
	I0805 13:00:34.689073  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.689081  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:34.689088  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:34.689148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:34.727552  451238 cri.go:89] found id: ""
	I0805 13:00:34.727592  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.727604  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:34.727612  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:34.727683  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:34.761661  451238 cri.go:89] found id: ""
	I0805 13:00:34.761696  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.761707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:34.761715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:34.761791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:34.800062  451238 cri.go:89] found id: ""
	I0805 13:00:34.800116  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.800128  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:34.800137  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:34.800198  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:34.833536  451238 cri.go:89] found id: ""
	I0805 13:00:34.833566  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.833578  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:34.833586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:34.833654  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:34.868079  451238 cri.go:89] found id: ""
	I0805 13:00:34.868117  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.868126  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:34.868135  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:34.868149  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:34.920092  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:34.920124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.934484  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:34.934510  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:35.007716  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:35.007751  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:35.007768  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:35.088183  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:35.088233  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:34.024704  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.025754  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.409917  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.409993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.444842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.943985  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:37.633443  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:37.647405  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:37.647470  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:37.684682  451238 cri.go:89] found id: ""
	I0805 13:00:37.684711  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.684720  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:37.684727  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:37.684779  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:37.723413  451238 cri.go:89] found id: ""
	I0805 13:00:37.723442  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.723449  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:37.723455  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:37.723506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:37.758388  451238 cri.go:89] found id: ""
	I0805 13:00:37.758418  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.758428  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:37.758437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:37.758501  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:37.797846  451238 cri.go:89] found id: ""
	I0805 13:00:37.797879  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.797890  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:37.797901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:37.797971  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:37.837053  451238 cri.go:89] found id: ""
	I0805 13:00:37.837082  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.837092  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:37.837104  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:37.837163  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:37.876185  451238 cri.go:89] found id: ""
	I0805 13:00:37.876211  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.876220  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:37.876226  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:37.876294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:37.915318  451238 cri.go:89] found id: ""
	I0805 13:00:37.915350  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.915362  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:37.915370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:37.915429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:37.953916  451238 cri.go:89] found id: ""
	I0805 13:00:37.953944  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.953954  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:37.953964  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:37.953976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.991116  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:37.991154  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:38.043796  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:38.043838  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:38.058636  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:38.058669  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:38.143022  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:38.143051  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:38.143067  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:40.721468  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:40.735679  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:40.735774  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:40.773583  451238 cri.go:89] found id: ""
	I0805 13:00:40.773609  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.773617  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:40.773626  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:40.773685  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:40.819857  451238 cri.go:89] found id: ""
	I0805 13:00:40.819886  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.819895  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:40.819901  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:40.819963  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:40.857156  451238 cri.go:89] found id: ""
	I0805 13:00:40.857184  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.857192  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:40.857198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:40.857251  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:40.892933  451238 cri.go:89] found id: ""
	I0805 13:00:40.892970  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.892981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:40.892990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:40.893046  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:40.927128  451238 cri.go:89] found id: ""
	I0805 13:00:40.927163  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.927173  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:40.927182  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:40.927237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:40.961790  451238 cri.go:89] found id: ""
	I0805 13:00:40.961817  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.961826  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:40.961832  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:40.961886  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:40.996249  451238 cri.go:89] found id: ""
	I0805 13:00:40.996282  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.996293  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:40.996300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:40.996371  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:41.032305  451238 cri.go:89] found id: ""
	I0805 13:00:41.032332  451238 logs.go:276] 0 containers: []
	W0805 13:00:41.032342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:41.032358  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:41.032375  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:41.075993  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:41.076027  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:41.126020  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:41.126057  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:41.140263  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:41.140288  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:41.216648  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:41.216670  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:41.216683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:38.524812  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.024597  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.909518  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:40.910256  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.410062  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.443930  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.945026  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.796367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:43.810086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:43.810162  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.844373  451238 cri.go:89] found id: ""
	I0805 13:00:43.844410  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.844422  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:43.844430  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:43.844502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:43.880249  451238 cri.go:89] found id: ""
	I0805 13:00:43.880285  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.880295  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:43.880303  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:43.880376  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:43.921279  451238 cri.go:89] found id: ""
	I0805 13:00:43.921313  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.921323  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:43.921329  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:43.921382  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:43.963736  451238 cri.go:89] found id: ""
	I0805 13:00:43.963782  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.963794  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:43.963803  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:43.963869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:44.009001  451238 cri.go:89] found id: ""
	I0805 13:00:44.009038  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.009050  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:44.009057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:44.009128  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:44.059484  451238 cri.go:89] found id: ""
	I0805 13:00:44.059514  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.059526  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:44.059534  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:44.059605  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:44.102043  451238 cri.go:89] found id: ""
	I0805 13:00:44.102075  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.102088  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:44.102094  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:44.102170  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:44.137518  451238 cri.go:89] found id: ""
	I0805 13:00:44.137558  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.137569  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:44.137584  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:44.137600  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:44.188139  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:44.188175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:44.202544  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:44.202588  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:44.278486  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:44.278508  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:44.278521  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:44.363419  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:44.363458  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:46.905665  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:46.922141  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:46.922206  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.025461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.523997  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.908437  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.409410  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.445919  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.944243  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.963468  451238 cri.go:89] found id: ""
	I0805 13:00:46.963494  451238 logs.go:276] 0 containers: []
	W0805 13:00:46.963502  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:46.963508  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:46.963557  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:47.003445  451238 cri.go:89] found id: ""
	I0805 13:00:47.003472  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.003480  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:47.003486  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:47.003537  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:47.043271  451238 cri.go:89] found id: ""
	I0805 13:00:47.043306  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.043318  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:47.043326  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:47.043394  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:47.079843  451238 cri.go:89] found id: ""
	I0805 13:00:47.079874  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.079884  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:47.079893  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:47.079954  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:47.116819  451238 cri.go:89] found id: ""
	I0805 13:00:47.116847  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.116856  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:47.116861  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:47.116917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:47.156302  451238 cri.go:89] found id: ""
	I0805 13:00:47.156331  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.156340  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:47.156353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:47.156410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:47.200419  451238 cri.go:89] found id: ""
	I0805 13:00:47.200449  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.200463  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:47.200469  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:47.200533  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:47.237483  451238 cri.go:89] found id: ""
	I0805 13:00:47.237515  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.237522  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:47.237532  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:47.237545  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:47.251598  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:47.251632  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:47.326457  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:47.326483  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:47.326501  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.410413  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:47.410455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:47.452696  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:47.452732  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.005335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:50.019610  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:50.019679  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:50.057401  451238 cri.go:89] found id: ""
	I0805 13:00:50.057435  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.057447  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:50.057456  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:50.057516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:50.101710  451238 cri.go:89] found id: ""
	I0805 13:00:50.101743  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.101751  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:50.101758  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:50.101822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:50.139624  451238 cri.go:89] found id: ""
	I0805 13:00:50.139658  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.139669  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:50.139677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:50.139761  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:50.176004  451238 cri.go:89] found id: ""
	I0805 13:00:50.176031  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.176039  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:50.176045  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:50.176123  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:50.219319  451238 cri.go:89] found id: ""
	I0805 13:00:50.219352  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.219362  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:50.219369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:50.219437  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:50.287443  451238 cri.go:89] found id: ""
	I0805 13:00:50.287478  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.287489  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:50.287498  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:50.287582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:50.321018  451238 cri.go:89] found id: ""
	I0805 13:00:50.321047  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.321056  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:50.321063  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:50.321124  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:50.354559  451238 cri.go:89] found id: ""
	I0805 13:00:50.354597  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.354610  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:50.354625  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:50.354642  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:50.398621  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:50.398657  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.451693  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:50.451735  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:50.466810  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:50.466851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:50.542431  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:50.542461  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:50.542482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.525977  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.025280  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.025760  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.410198  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.908466  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.946086  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.445962  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.128466  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:53.144139  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:53.144216  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:53.178383  451238 cri.go:89] found id: ""
	I0805 13:00:53.178427  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.178438  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:53.178447  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:53.178516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:53.220312  451238 cri.go:89] found id: ""
	I0805 13:00:53.220348  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.220358  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:53.220365  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:53.220432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:53.255352  451238 cri.go:89] found id: ""
	I0805 13:00:53.255380  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.255390  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:53.255398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:53.255473  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:53.293254  451238 cri.go:89] found id: ""
	I0805 13:00:53.293292  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.293311  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:53.293320  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:53.293395  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:53.329407  451238 cri.go:89] found id: ""
	I0805 13:00:53.329436  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.329448  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:53.329455  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:53.329523  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:53.362838  451238 cri.go:89] found id: ""
	I0805 13:00:53.362868  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.362876  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:53.362883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:53.362957  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:53.399283  451238 cri.go:89] found id: ""
	I0805 13:00:53.399313  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.399324  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:53.399332  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:53.399405  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:53.438527  451238 cri.go:89] found id: ""
	I0805 13:00:53.438558  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.438567  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:53.438578  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:53.438597  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:53.492709  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:53.492760  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:53.507522  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:53.507555  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:53.581690  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:53.581710  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:53.581724  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.664402  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:53.664451  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.209640  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:56.224403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:56.224487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:56.266214  451238 cri.go:89] found id: ""
	I0805 13:00:56.266243  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.266254  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:56.266263  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:56.266328  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:56.304034  451238 cri.go:89] found id: ""
	I0805 13:00:56.304070  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.304082  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:56.304091  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:56.304172  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:56.342133  451238 cri.go:89] found id: ""
	I0805 13:00:56.342159  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.342167  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:56.342173  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:56.342225  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:56.378549  451238 cri.go:89] found id: ""
	I0805 13:00:56.378588  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.378599  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:56.378606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:56.378667  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:56.415613  451238 cri.go:89] found id: ""
	I0805 13:00:56.415641  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.415651  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:56.415657  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:56.415715  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:56.451915  451238 cri.go:89] found id: ""
	I0805 13:00:56.451944  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.451953  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:56.451960  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:56.452021  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:56.492219  451238 cri.go:89] found id: ""
	I0805 13:00:56.492255  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.492267  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:56.492275  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:56.492347  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:56.534564  451238 cri.go:89] found id: ""
	I0805 13:00:56.534606  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.534618  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:56.534632  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:56.534652  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:56.548772  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:56.548813  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:56.625649  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:56.625678  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:56.625695  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:56.716735  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:56.716787  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.771881  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:56.771910  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:54.525355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.025659  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:54.908805  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:56.909601  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:55.943885  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.945233  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.325624  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:59.338796  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:59.338869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:59.375002  451238 cri.go:89] found id: ""
	I0805 13:00:59.375039  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.375050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:59.375059  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:59.375138  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:59.410778  451238 cri.go:89] found id: ""
	I0805 13:00:59.410800  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.410810  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:59.410817  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:59.410873  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:59.453728  451238 cri.go:89] found id: ""
	I0805 13:00:59.453760  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.453771  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:59.453779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:59.453845  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:59.492968  451238 cri.go:89] found id: ""
	I0805 13:00:59.493002  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.493013  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:59.493021  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:59.493091  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:59.533342  451238 cri.go:89] found id: ""
	I0805 13:00:59.533372  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.533383  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:59.533390  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:59.533445  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:59.569677  451238 cri.go:89] found id: ""
	I0805 13:00:59.569705  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.569715  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:59.569722  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:59.569789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:59.605106  451238 cri.go:89] found id: ""
	I0805 13:00:59.605139  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.605150  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:59.605158  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:59.605228  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:59.639948  451238 cri.go:89] found id: ""
	I0805 13:00:59.639980  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.639989  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:59.640000  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:59.640016  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:59.679926  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:59.679956  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.731545  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:59.731591  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:59.746286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:59.746320  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:59.828398  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:59.828420  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:59.828439  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:59.524365  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.525092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.410713  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.909619  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.945483  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.445780  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.412560  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:02.429633  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:02.429718  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:02.475916  451238 cri.go:89] found id: ""
	I0805 13:01:02.475951  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.475963  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:02.475971  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:02.476061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:02.528807  451238 cri.go:89] found id: ""
	I0805 13:01:02.528837  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.528849  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:02.528856  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:02.528924  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:02.575164  451238 cri.go:89] found id: ""
	I0805 13:01:02.575194  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.575210  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:02.575218  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:02.575286  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:02.614709  451238 cri.go:89] found id: ""
	I0805 13:01:02.614800  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.614815  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:02.614824  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:02.614902  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:02.654941  451238 cri.go:89] found id: ""
	I0805 13:01:02.654979  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.654990  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:02.654997  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:02.655069  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:02.690552  451238 cri.go:89] found id: ""
	I0805 13:01:02.690586  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.690595  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:02.690602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:02.690657  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:02.725607  451238 cri.go:89] found id: ""
	I0805 13:01:02.725644  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.725656  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:02.725665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:02.725745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:02.760180  451238 cri.go:89] found id: ""
	I0805 13:01:02.760211  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.760223  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:02.760244  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:02.760262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:02.813071  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:02.813128  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:02.828633  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:02.828665  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:02.898049  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:02.898074  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:02.898087  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.988077  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:02.988124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:05.532719  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:05.546423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:05.546489  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:05.590978  451238 cri.go:89] found id: ""
	I0805 13:01:05.591006  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.591013  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:05.591019  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:05.591071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:05.631251  451238 cri.go:89] found id: ""
	I0805 13:01:05.631287  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.631298  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:05.631306  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:05.631391  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:05.671826  451238 cri.go:89] found id: ""
	I0805 13:01:05.671863  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.671875  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:05.671883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:05.671951  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:05.708147  451238 cri.go:89] found id: ""
	I0805 13:01:05.708176  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.708186  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:05.708194  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:05.708262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:05.741962  451238 cri.go:89] found id: ""
	I0805 13:01:05.741994  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.742006  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:05.742015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:05.742087  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:05.777930  451238 cri.go:89] found id: ""
	I0805 13:01:05.777965  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.777976  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:05.777985  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:05.778061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:05.813066  451238 cri.go:89] found id: ""
	I0805 13:01:05.813099  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.813111  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:05.813119  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:05.813189  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:05.849382  451238 cri.go:89] found id: ""
	I0805 13:01:05.849410  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.849418  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:05.849428  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:05.849440  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:05.903376  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:05.903423  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:05.918540  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:05.918575  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:05.990608  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:05.990637  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:05.990658  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:06.072524  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:06.072571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:04.025528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.525325  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.409190  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.409231  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:07.445278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.617528  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:08.631637  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:08.631713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:08.669999  451238 cri.go:89] found id: ""
	I0805 13:01:08.670039  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.670050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:08.670065  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:08.670147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:08.705322  451238 cri.go:89] found id: ""
	I0805 13:01:08.705356  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.705365  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:08.705370  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:08.705442  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:08.744884  451238 cri.go:89] found id: ""
	I0805 13:01:08.744915  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.744927  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:08.744936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:08.745018  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:08.782394  451238 cri.go:89] found id: ""
	I0805 13:01:08.782428  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.782440  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:08.782448  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:08.782518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:08.816989  451238 cri.go:89] found id: ""
	I0805 13:01:08.817018  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.817027  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:08.817034  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:08.817106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:08.856389  451238 cri.go:89] found id: ""
	I0805 13:01:08.856420  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.856431  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:08.856439  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:08.856506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:08.891942  451238 cri.go:89] found id: ""
	I0805 13:01:08.891975  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.891986  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:08.891995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:08.892064  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:08.930329  451238 cri.go:89] found id: ""
	I0805 13:01:08.930364  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.930375  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:08.930389  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:08.930406  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.972574  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:08.972610  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:09.026194  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:09.026228  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:09.040973  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:09.041002  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:09.115094  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:09.115121  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:09.115143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:11.698322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:11.711841  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:11.711927  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:11.749152  451238 cri.go:89] found id: ""
	I0805 13:01:11.749187  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.749199  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:11.749207  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:11.749274  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:11.785395  451238 cri.go:89] found id: ""
	I0805 13:01:11.785430  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.785441  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:11.785449  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:11.785516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:11.822240  451238 cri.go:89] found id: ""
	I0805 13:01:11.822282  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.822293  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:11.822302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:11.822372  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:11.858755  451238 cri.go:89] found id: ""
	I0805 13:01:11.858794  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.858805  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:11.858814  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:11.858884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:11.893064  451238 cri.go:89] found id: ""
	I0805 13:01:11.893101  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.893113  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:11.893121  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:11.893195  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:11.930965  451238 cri.go:89] found id: ""
	I0805 13:01:11.931003  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.931015  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:11.931025  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:11.931089  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:09.025566  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.525069  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.910618  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.409157  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:09.944797  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:12.445029  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.967594  451238 cri.go:89] found id: ""
	I0805 13:01:11.967620  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.967630  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:11.967638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:11.967697  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:12.004978  451238 cri.go:89] found id: ""
	I0805 13:01:12.005007  451238 logs.go:276] 0 containers: []
	W0805 13:01:12.005015  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:12.005025  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:12.005037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:12.087476  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:12.087500  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:12.087515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:12.177690  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:12.177757  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:12.222858  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:12.222889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:12.273322  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:12.273362  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:14.788210  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:14.802351  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:14.802426  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:14.837705  451238 cri.go:89] found id: ""
	I0805 13:01:14.837736  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.837746  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:14.837755  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:14.837824  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:14.873389  451238 cri.go:89] found id: ""
	I0805 13:01:14.873420  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.873430  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:14.873438  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:14.873506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:14.913969  451238 cri.go:89] found id: ""
	I0805 13:01:14.913999  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.914009  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:14.914018  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:14.914081  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:14.953478  451238 cri.go:89] found id: ""
	I0805 13:01:14.953510  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.953521  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:14.953528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:14.953584  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:14.992166  451238 cri.go:89] found id: ""
	I0805 13:01:14.992197  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.992206  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:14.992212  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:14.992291  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:15.031258  451238 cri.go:89] found id: ""
	I0805 13:01:15.031285  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.031293  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:15.031300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:15.031353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:15.068944  451238 cri.go:89] found id: ""
	I0805 13:01:15.068972  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.068980  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:15.068986  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:15.069042  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:15.105413  451238 cri.go:89] found id: ""
	I0805 13:01:15.105443  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.105454  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:15.105467  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:15.105489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:15.161925  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:15.161969  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:15.177174  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:15.177206  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:15.257950  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:15.257975  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:15.257989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:15.336672  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:15.336716  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:13.526088  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:16.025513  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:13.908773  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:15.908817  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.910431  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:14.945842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.444869  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.876314  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:17.889842  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:17.889909  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:17.928050  451238 cri.go:89] found id: ""
	I0805 13:01:17.928077  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.928086  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:17.928092  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:17.928150  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:17.965713  451238 cri.go:89] found id: ""
	I0805 13:01:17.965751  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.965762  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:17.965770  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:17.965837  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:18.002938  451238 cri.go:89] found id: ""
	I0805 13:01:18.002972  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.002984  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:18.002992  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:18.003062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:18.040140  451238 cri.go:89] found id: ""
	I0805 13:01:18.040178  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.040190  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:18.040198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:18.040269  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:18.075427  451238 cri.go:89] found id: ""
	I0805 13:01:18.075463  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.075475  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:18.075490  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:18.075558  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:18.113469  451238 cri.go:89] found id: ""
	I0805 13:01:18.113507  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.113521  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:18.113528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:18.113587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:18.152626  451238 cri.go:89] found id: ""
	I0805 13:01:18.152662  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.152672  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:18.152678  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:18.152745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:18.189540  451238 cri.go:89] found id: ""
	I0805 13:01:18.189577  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.189590  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:18.189602  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:18.189618  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:18.244314  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:18.244353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:18.257912  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:18.257939  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:18.339659  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:18.339682  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:18.339699  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:18.425391  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:18.425449  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:20.975889  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:20.989798  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:20.989868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:21.030858  451238 cri.go:89] found id: ""
	I0805 13:01:21.030894  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.030906  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:21.030915  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:21.030979  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:21.067367  451238 cri.go:89] found id: ""
	I0805 13:01:21.067402  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.067411  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:21.067419  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:21.067476  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:21.104307  451238 cri.go:89] found id: ""
	I0805 13:01:21.104337  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.104352  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:21.104361  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:21.104424  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:21.141486  451238 cri.go:89] found id: ""
	I0805 13:01:21.141519  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.141531  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:21.141539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:21.141606  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:21.179247  451238 cri.go:89] found id: ""
	I0805 13:01:21.179305  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.179317  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:21.179330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:21.179406  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:21.215030  451238 cri.go:89] found id: ""
	I0805 13:01:21.215065  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.215075  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:21.215083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:21.215152  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:21.252982  451238 cri.go:89] found id: ""
	I0805 13:01:21.253008  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.253016  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:21.253022  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:21.253097  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:21.290256  451238 cri.go:89] found id: ""
	I0805 13:01:21.290292  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.290302  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:21.290325  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:21.290343  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:21.342809  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:21.342855  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:21.357959  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:21.358000  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:21.433087  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:21.433120  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:21.433143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:21.514261  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:21.514312  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:18.025965  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.524832  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.409943  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:22.909233  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:19.445074  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:21.445547  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:23.445637  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.060402  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:24.076056  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:24.076131  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:24.115976  451238 cri.go:89] found id: ""
	I0805 13:01:24.116009  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.116022  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:24.116031  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:24.116111  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:24.158411  451238 cri.go:89] found id: ""
	I0805 13:01:24.158440  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.158448  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:24.158454  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:24.158520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:24.194589  451238 cri.go:89] found id: ""
	I0805 13:01:24.194624  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.194635  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:24.194644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:24.194720  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:24.231528  451238 cri.go:89] found id: ""
	I0805 13:01:24.231562  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.231569  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:24.231576  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:24.231649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:24.268491  451238 cri.go:89] found id: ""
	I0805 13:01:24.268523  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.268532  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:24.268538  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:24.268602  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:24.306718  451238 cri.go:89] found id: ""
	I0805 13:01:24.306752  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.306763  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:24.306772  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:24.306839  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:24.343552  451238 cri.go:89] found id: ""
	I0805 13:01:24.343578  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.343586  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:24.343593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:24.343649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:24.384555  451238 cri.go:89] found id: ""
	I0805 13:01:24.384590  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.384602  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:24.384615  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:24.384633  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.430256  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:24.430298  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:24.484616  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:24.484661  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:24.500926  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:24.500958  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:24.581379  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:24.581410  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:24.581424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:22.525806  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.526411  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.024452  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.408887  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.409717  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.945113  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:28.444740  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.167538  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:27.181959  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:27.182035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:27.223243  451238 cri.go:89] found id: ""
	I0805 13:01:27.223282  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.223293  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:27.223301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:27.223374  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:27.257806  451238 cri.go:89] found id: ""
	I0805 13:01:27.257843  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.257856  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:27.257864  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:27.257940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:27.304306  451238 cri.go:89] found id: ""
	I0805 13:01:27.304342  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.304353  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:27.304370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:27.304439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:27.342595  451238 cri.go:89] found id: ""
	I0805 13:01:27.342623  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.342631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:27.342638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:27.342707  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:27.385628  451238 cri.go:89] found id: ""
	I0805 13:01:27.385661  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.385670  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:27.385677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:27.385760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:27.425059  451238 cri.go:89] found id: ""
	I0805 13:01:27.425091  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.425100  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:27.425106  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:27.425175  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:27.465739  451238 cri.go:89] found id: ""
	I0805 13:01:27.465783  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.465794  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:27.465807  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:27.465869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:27.506431  451238 cri.go:89] found id: ""
	I0805 13:01:27.506460  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.506468  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:27.506477  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:27.506494  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:27.586440  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:27.586467  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:27.586482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.667826  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:27.667869  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:27.710458  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:27.710496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:27.763057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:27.763100  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.278799  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:30.293788  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:30.293874  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:30.336209  451238 cri.go:89] found id: ""
	I0805 13:01:30.336240  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.336248  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:30.336255  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:30.336323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:30.371593  451238 cri.go:89] found id: ""
	I0805 13:01:30.371627  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.371642  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:30.371649  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:30.371714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:30.408266  451238 cri.go:89] found id: ""
	I0805 13:01:30.408298  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.408317  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:30.408325  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:30.408388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:30.448841  451238 cri.go:89] found id: ""
	I0805 13:01:30.448864  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.448872  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:30.448878  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:30.448940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:30.488367  451238 cri.go:89] found id: ""
	I0805 13:01:30.488403  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.488411  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:30.488418  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:30.488485  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:30.527131  451238 cri.go:89] found id: ""
	I0805 13:01:30.527163  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.527173  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:30.527181  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:30.527249  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:30.568089  451238 cri.go:89] found id: ""
	I0805 13:01:30.568122  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.568131  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:30.568138  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:30.568203  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:30.605952  451238 cri.go:89] found id: ""
	I0805 13:01:30.605990  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.606007  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:30.606021  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:30.606041  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:30.656449  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:30.656491  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:30.710124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:30.710164  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.724417  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:30.724455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:30.820639  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:30.820669  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:30.820687  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:29.025377  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:31.525340  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:29.909043  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.410359  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:30.445047  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.445931  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:33.403497  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:33.419581  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:33.419651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:33.462011  451238 cri.go:89] found id: ""
	I0805 13:01:33.462042  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.462051  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:33.462057  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:33.462126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:33.502476  451238 cri.go:89] found id: ""
	I0805 13:01:33.502509  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.502519  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:33.502527  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:33.502601  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:33.547392  451238 cri.go:89] found id: ""
	I0805 13:01:33.547421  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.547430  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:33.547437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:33.547490  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:33.584013  451238 cri.go:89] found id: ""
	I0805 13:01:33.584040  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.584048  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:33.584054  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:33.584125  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:33.617325  451238 cri.go:89] found id: ""
	I0805 13:01:33.617359  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.617367  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:33.617374  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:33.617429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:33.651922  451238 cri.go:89] found id: ""
	I0805 13:01:33.651959  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.651971  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:33.651980  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:33.652049  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:33.689487  451238 cri.go:89] found id: ""
	I0805 13:01:33.689515  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.689522  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:33.689529  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:33.689580  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:33.723220  451238 cri.go:89] found id: ""
	I0805 13:01:33.723251  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.723260  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:33.723270  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:33.723282  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:33.777271  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:33.777311  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:33.792497  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:33.792532  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:33.866801  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:33.866826  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:33.866842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.946739  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:33.946774  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:36.486108  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:36.501316  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:36.501397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:36.542082  451238 cri.go:89] found id: ""
	I0805 13:01:36.542118  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.542130  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:36.542139  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:36.542217  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:36.581005  451238 cri.go:89] found id: ""
	I0805 13:01:36.581047  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.581059  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:36.581068  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:36.581148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:36.623945  451238 cri.go:89] found id: ""
	I0805 13:01:36.623974  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.623982  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:36.623987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:36.624041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:36.661632  451238 cri.go:89] found id: ""
	I0805 13:01:36.661665  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.661673  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:36.661680  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:36.661738  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:36.701808  451238 cri.go:89] found id: ""
	I0805 13:01:36.701839  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.701850  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:36.701857  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:36.701941  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:36.742287  451238 cri.go:89] found id: ""
	I0805 13:01:36.742320  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.742331  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:36.742340  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:36.742410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:36.794581  451238 cri.go:89] found id: ""
	I0805 13:01:36.794610  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.794621  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:36.794629  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:36.794690  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:36.833271  451238 cri.go:89] found id: ""
	I0805 13:01:36.833301  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.833311  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:36.833325  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:36.833346  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:36.921427  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:36.921467  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:34.024353  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.025557  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.909401  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.909529  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.945077  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.945632  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.965468  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:36.965503  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:37.018475  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:37.018515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:37.033671  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:37.033697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:37.105339  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.606042  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:39.619215  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:39.619296  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:39.655614  451238 cri.go:89] found id: ""
	I0805 13:01:39.655648  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.655660  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:39.655668  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:39.655760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:39.691489  451238 cri.go:89] found id: ""
	I0805 13:01:39.691523  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.691535  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:39.691543  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:39.691610  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:39.726394  451238 cri.go:89] found id: ""
	I0805 13:01:39.726427  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.726438  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:39.726446  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:39.726518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:39.759847  451238 cri.go:89] found id: ""
	I0805 13:01:39.759897  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.759909  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:39.759918  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:39.759988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:39.795011  451238 cri.go:89] found id: ""
	I0805 13:01:39.795043  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.795051  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:39.795057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:39.795120  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:39.831302  451238 cri.go:89] found id: ""
	I0805 13:01:39.831336  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.831346  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:39.831356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:39.831432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:39.866506  451238 cri.go:89] found id: ""
	I0805 13:01:39.866540  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.866547  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:39.866554  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:39.866622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:39.898083  451238 cri.go:89] found id: ""
	I0805 13:01:39.898108  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.898115  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:39.898128  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:39.898147  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:39.912192  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:39.912221  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:39.989216  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.989246  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:39.989262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:40.069702  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:40.069746  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:40.118390  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:40.118428  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:38.525929  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:40.527120  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:38.909905  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.408953  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.409966  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:39.445474  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.944704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.944956  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:42.669421  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:42.682287  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:42.682359  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:42.722933  451238 cri.go:89] found id: ""
	I0805 13:01:42.722961  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.722969  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:42.722975  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:42.723037  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:42.757604  451238 cri.go:89] found id: ""
	I0805 13:01:42.757635  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.757646  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:42.757654  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:42.757723  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:42.795825  451238 cri.go:89] found id: ""
	I0805 13:01:42.795852  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.795863  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:42.795871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:42.795939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:42.831749  451238 cri.go:89] found id: ""
	I0805 13:01:42.831779  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.831791  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:42.831800  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:42.831862  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:42.866280  451238 cri.go:89] found id: ""
	I0805 13:01:42.866310  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.866322  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:42.866330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:42.866390  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:42.904393  451238 cri.go:89] found id: ""
	I0805 13:01:42.904427  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.904436  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:42.904445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:42.904510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:42.943175  451238 cri.go:89] found id: ""
	I0805 13:01:42.943204  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.943215  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:42.943223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:42.943292  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:42.979117  451238 cri.go:89] found id: ""
	I0805 13:01:42.979144  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.979152  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:42.979174  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:42.979191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:43.032032  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:43.032070  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:43.046285  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:43.046315  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:43.120300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:43.120327  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:43.120347  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:43.209800  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:43.209851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:45.759057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:45.771984  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:45.772056  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:45.805421  451238 cri.go:89] found id: ""
	I0805 13:01:45.805451  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.805459  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:45.805466  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:45.805521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:45.841552  451238 cri.go:89] found id: ""
	I0805 13:01:45.841579  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.841588  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:45.841597  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:45.841672  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:45.878502  451238 cri.go:89] found id: ""
	I0805 13:01:45.878529  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.878537  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:45.878546  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:45.878622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:45.921145  451238 cri.go:89] found id: ""
	I0805 13:01:45.921187  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.921198  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:45.921207  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:45.921273  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:45.958408  451238 cri.go:89] found id: ""
	I0805 13:01:45.958437  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.958445  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:45.958452  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:45.958521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:45.994632  451238 cri.go:89] found id: ""
	I0805 13:01:45.994660  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.994669  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:45.994676  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:45.994727  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:46.032930  451238 cri.go:89] found id: ""
	I0805 13:01:46.032961  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.032971  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:46.032978  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:46.033041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:46.074396  451238 cri.go:89] found id: ""
	I0805 13:01:46.074429  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.074441  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:46.074454  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:46.074475  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:46.131977  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:46.132020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:46.147924  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:46.147957  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:46.222005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:46.222038  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:46.222054  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:46.306799  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:46.306842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:43.024643  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.524936  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.410385  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:47.909281  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:46.444746  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.950198  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.856982  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:48.870945  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:48.871025  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:48.930811  451238 cri.go:89] found id: ""
	I0805 13:01:48.930837  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.930852  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:48.930858  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:48.930917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:48.986604  451238 cri.go:89] found id: ""
	I0805 13:01:48.986629  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.986637  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:48.986643  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:48.986706  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:49.039433  451238 cri.go:89] found id: ""
	I0805 13:01:49.039468  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.039479  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:49.039487  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:49.039555  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:49.079593  451238 cri.go:89] found id: ""
	I0805 13:01:49.079625  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.079637  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:49.079645  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:49.079714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:49.116243  451238 cri.go:89] found id: ""
	I0805 13:01:49.116274  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.116284  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:49.116292  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:49.116360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:49.158744  451238 cri.go:89] found id: ""
	I0805 13:01:49.158779  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.158790  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:49.158799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:49.158868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:49.193747  451238 cri.go:89] found id: ""
	I0805 13:01:49.193778  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.193786  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:49.193792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:49.193843  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:49.227663  451238 cri.go:89] found id: ""
	I0805 13:01:49.227691  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.227704  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:49.227714  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:49.227727  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:49.281380  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:49.281424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:49.296286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:49.296318  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:49.368584  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:49.368609  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:49.368625  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:49.453857  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:49.453909  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:48.024987  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.026076  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.408363  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:52.410039  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.444602  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:53.445118  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.993057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:52.006066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:52.006148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:52.043179  451238 cri.go:89] found id: ""
	I0805 13:01:52.043212  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.043223  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:52.043231  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:52.043300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:52.076469  451238 cri.go:89] found id: ""
	I0805 13:01:52.076502  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.076512  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:52.076520  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:52.076586  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:52.112443  451238 cri.go:89] found id: ""
	I0805 13:01:52.112477  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.112488  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:52.112497  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:52.112569  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:52.147589  451238 cri.go:89] found id: ""
	I0805 13:01:52.147620  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.147631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:52.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:52.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:52.184016  451238 cri.go:89] found id: ""
	I0805 13:01:52.184053  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.184063  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:52.184072  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:52.184134  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:52.219670  451238 cri.go:89] found id: ""
	I0805 13:01:52.219702  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.219714  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:52.219727  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:52.219820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:52.258697  451238 cri.go:89] found id: ""
	I0805 13:01:52.258731  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.258744  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:52.258752  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:52.258818  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:52.299599  451238 cri.go:89] found id: ""
	I0805 13:01:52.299636  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.299649  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:52.299665  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:52.299683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:52.351730  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:52.351772  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:52.365993  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:52.366022  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:52.436019  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:52.436041  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:52.436056  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:52.520082  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:52.520118  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:55.064214  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:55.077358  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:55.077454  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:55.110523  451238 cri.go:89] found id: ""
	I0805 13:01:55.110555  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.110564  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:55.110570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:55.110630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:55.147870  451238 cri.go:89] found id: ""
	I0805 13:01:55.147905  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.147916  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:55.147925  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:55.147998  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:55.180769  451238 cri.go:89] found id: ""
	I0805 13:01:55.180803  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.180814  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:55.180822  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:55.180890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:55.217290  451238 cri.go:89] found id: ""
	I0805 13:01:55.217332  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.217343  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:55.217353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:55.217420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:55.254185  451238 cri.go:89] found id: ""
	I0805 13:01:55.254221  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.254232  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:55.254239  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:55.254295  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:55.290633  451238 cri.go:89] found id: ""
	I0805 13:01:55.290662  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.290673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:55.290681  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:55.290747  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:55.325830  451238 cri.go:89] found id: ""
	I0805 13:01:55.325862  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.325873  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:55.325880  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:55.325947  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:55.359887  451238 cri.go:89] found id: ""
	I0805 13:01:55.359922  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.359931  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:55.359941  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:55.359953  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:55.418251  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:55.418299  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:55.432007  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:55.432038  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:55.507177  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:55.507205  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:55.507219  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:55.586919  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:55.586965  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:52.525480  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.525653  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.024834  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.410408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:56.909810  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:55.944741  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.946654  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:58.128822  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:58.142726  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:58.142799  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:58.178027  451238 cri.go:89] found id: ""
	I0805 13:01:58.178056  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.178067  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:58.178075  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:58.178147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:58.213309  451238 cri.go:89] found id: ""
	I0805 13:01:58.213340  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.213351  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:58.213358  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:58.213430  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:58.247296  451238 cri.go:89] found id: ""
	I0805 13:01:58.247323  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.247332  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:58.247338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:58.247393  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:58.280226  451238 cri.go:89] found id: ""
	I0805 13:01:58.280255  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.280266  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:58.280277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:58.280335  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:58.316934  451238 cri.go:89] found id: ""
	I0805 13:01:58.316969  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.316981  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:58.316989  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:58.317055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:58.360931  451238 cri.go:89] found id: ""
	I0805 13:01:58.360967  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.360979  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:58.360987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:58.361055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:58.399112  451238 cri.go:89] found id: ""
	I0805 13:01:58.399150  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.399163  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:58.399171  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:58.399244  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:58.441903  451238 cri.go:89] found id: ""
	I0805 13:01:58.441930  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.441941  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:58.441952  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:58.441967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:58.524869  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:58.524908  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.562598  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:58.562634  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:58.618274  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:58.618313  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:58.633011  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:58.633039  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:58.706287  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.206971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:01.222277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:01.222357  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:01.266949  451238 cri.go:89] found id: ""
	I0805 13:02:01.266982  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.266993  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:01.267007  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:01.267108  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:01.306765  451238 cri.go:89] found id: ""
	I0805 13:02:01.306791  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.306799  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:01.306805  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:01.306859  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:01.345108  451238 cri.go:89] found id: ""
	I0805 13:02:01.345145  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.345157  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:01.345164  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:01.345227  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:01.383201  451238 cri.go:89] found id: ""
	I0805 13:02:01.383231  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.383239  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:01.383245  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:01.383307  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:01.419292  451238 cri.go:89] found id: ""
	I0805 13:02:01.419320  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.419331  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:01.419338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:01.419410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:01.456447  451238 cri.go:89] found id: ""
	I0805 13:02:01.456482  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.456492  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:01.456500  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:01.456568  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:01.496266  451238 cri.go:89] found id: ""
	I0805 13:02:01.496298  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.496306  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:01.496312  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:01.496375  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:01.541492  451238 cri.go:89] found id: ""
	I0805 13:02:01.541529  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.541541  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:01.541555  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:01.541571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:01.593140  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:01.593185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:01.606641  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:01.606670  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:01.681989  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.682015  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:01.682030  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:01.765612  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:01.765655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:59.025355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.025443  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:59.408591  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.409368  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:00.445254  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:02.944495  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:04.311066  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:04.326530  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:04.326599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:04.360091  451238 cri.go:89] found id: ""
	I0805 13:02:04.360124  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.360136  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:04.360142  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:04.360214  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:04.398983  451238 cri.go:89] found id: ""
	I0805 13:02:04.399014  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.399026  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:04.399045  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:04.399122  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:04.433444  451238 cri.go:89] found id: ""
	I0805 13:02:04.433474  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.433483  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:04.433495  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:04.433546  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:04.470113  451238 cri.go:89] found id: ""
	I0805 13:02:04.470145  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.470156  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:04.470167  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:04.470233  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:04.505695  451238 cri.go:89] found id: ""
	I0805 13:02:04.505721  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.505731  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:04.505738  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:04.505801  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:04.544093  451238 cri.go:89] found id: ""
	I0805 13:02:04.544121  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.544129  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:04.544136  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:04.544196  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:04.579663  451238 cri.go:89] found id: ""
	I0805 13:02:04.579702  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.579715  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:04.579724  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:04.579803  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:04.616524  451238 cri.go:89] found id: ""
	I0805 13:02:04.616565  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.616577  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:04.616590  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:04.616607  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:04.693014  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:04.693035  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:04.693048  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:04.772508  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:04.772550  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.813014  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:04.813043  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:04.864653  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:04.864702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:03.525225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:06.024868  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:03.908365  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.908993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.910958  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.444593  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.444737  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.378816  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:07.392347  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:07.392439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:07.425843  451238 cri.go:89] found id: ""
	I0805 13:02:07.425876  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.425887  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:07.425895  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:07.425958  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:07.461547  451238 cri.go:89] found id: ""
	I0805 13:02:07.461575  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.461584  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:07.461591  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:07.461651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:07.496461  451238 cri.go:89] found id: ""
	I0805 13:02:07.496500  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.496510  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:07.496521  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:07.496599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:07.531520  451238 cri.go:89] found id: ""
	I0805 13:02:07.531556  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.531566  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:07.531574  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:07.531642  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:07.571821  451238 cri.go:89] found id: ""
	I0805 13:02:07.571855  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.571866  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:07.571876  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:07.571948  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:07.611111  451238 cri.go:89] found id: ""
	I0805 13:02:07.611151  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.611159  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:07.611165  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:07.611226  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:07.651428  451238 cri.go:89] found id: ""
	I0805 13:02:07.651456  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.651464  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:07.651470  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:07.651520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:07.689828  451238 cri.go:89] found id: ""
	I0805 13:02:07.689858  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.689866  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:07.689877  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:07.689893  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:07.746381  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:07.746422  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.760953  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:07.760989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:07.834859  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:07.834883  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:07.834901  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:07.915344  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:07.915376  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:10.459232  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:10.472789  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:10.472853  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:10.508434  451238 cri.go:89] found id: ""
	I0805 13:02:10.508462  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.508470  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:10.508477  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:10.508539  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:10.543487  451238 cri.go:89] found id: ""
	I0805 13:02:10.543515  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.543524  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:10.543530  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:10.543582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:10.588274  451238 cri.go:89] found id: ""
	I0805 13:02:10.588302  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.588310  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:10.588317  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:10.588379  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:10.620810  451238 cri.go:89] found id: ""
	I0805 13:02:10.620851  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.620863  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:10.620871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:10.620945  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:10.657882  451238 cri.go:89] found id: ""
	I0805 13:02:10.657913  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.657923  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:10.657929  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:10.657993  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:10.696188  451238 cri.go:89] found id: ""
	I0805 13:02:10.696220  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.696229  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:10.696235  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:10.696294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:10.729942  451238 cri.go:89] found id: ""
	I0805 13:02:10.729977  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.729988  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:10.729996  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:10.730050  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:10.761972  451238 cri.go:89] found id: ""
	I0805 13:02:10.762000  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.762008  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:10.762018  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:10.762032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:10.816859  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:10.816890  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:10.830348  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:10.830379  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:10.902720  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:10.902753  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:10.902771  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:10.981464  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:10.981505  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:08.024948  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.525441  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.408841  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:12.409506  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:09.445359  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:11.944853  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:13.528296  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:13.541813  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:13.541887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:13.575632  451238 cri.go:89] found id: ""
	I0805 13:02:13.575669  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.575681  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:13.575689  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:13.575766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:13.612646  451238 cri.go:89] found id: ""
	I0805 13:02:13.612680  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.612691  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:13.612699  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:13.612755  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:13.650310  451238 cri.go:89] found id: ""
	I0805 13:02:13.650341  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.650361  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:13.650369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:13.650439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:13.686941  451238 cri.go:89] found id: ""
	I0805 13:02:13.686970  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.686981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:13.686990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:13.687054  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:13.722250  451238 cri.go:89] found id: ""
	I0805 13:02:13.722285  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.722297  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:13.722306  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:13.722388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:13.758337  451238 cri.go:89] found id: ""
	I0805 13:02:13.758367  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.758375  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:13.758382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:13.758443  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:13.792980  451238 cri.go:89] found id: ""
	I0805 13:02:13.793016  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.793028  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:13.793036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:13.793127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:13.831511  451238 cri.go:89] found id: ""
	I0805 13:02:13.831539  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.831547  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:13.831558  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:13.831579  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.885124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:13.885169  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:13.899112  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:13.899155  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:13.977058  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:13.977099  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:13.977115  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:14.060873  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:14.060911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:16.602595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:16.617557  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:16.617638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:16.660212  451238 cri.go:89] found id: ""
	I0805 13:02:16.660244  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.660256  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:16.660264  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:16.660323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:16.695515  451238 cri.go:89] found id: ""
	I0805 13:02:16.695553  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.695564  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:16.695572  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:16.695638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:16.732844  451238 cri.go:89] found id: ""
	I0805 13:02:16.732875  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.732884  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:16.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:16.732943  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:16.772465  451238 cri.go:89] found id: ""
	I0805 13:02:16.772497  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.772504  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:16.772517  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:16.772582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:16.809826  451238 cri.go:89] found id: ""
	I0805 13:02:16.809863  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.809875  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:16.809882  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:16.809949  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:16.849480  451238 cri.go:89] found id: ""
	I0805 13:02:16.849512  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.849523  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:16.849531  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:16.849598  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:16.884098  451238 cri.go:89] found id: ""
	I0805 13:02:16.884132  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.884144  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:16.884152  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:16.884222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:16.920497  451238 cri.go:89] found id: ""
	I0805 13:02:16.920523  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.920530  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:16.920541  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:16.920556  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.025299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:15.525474  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.908633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.909254  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.445321  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.945044  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:18.945630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.975287  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:16.975317  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:16.989524  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:16.989552  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:17.057997  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:17.058022  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:17.058037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:17.133721  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:17.133763  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:19.672385  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:19.687948  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:19.688017  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:19.724105  451238 cri.go:89] found id: ""
	I0805 13:02:19.724132  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.724140  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:19.724147  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:19.724199  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:19.758263  451238 cri.go:89] found id: ""
	I0805 13:02:19.758296  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.758306  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:19.758314  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:19.758381  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:19.792924  451238 cri.go:89] found id: ""
	I0805 13:02:19.792954  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.792961  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:19.792967  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:19.793023  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:19.826340  451238 cri.go:89] found id: ""
	I0805 13:02:19.826367  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.826375  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:19.826382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:19.826434  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:19.864289  451238 cri.go:89] found id: ""
	I0805 13:02:19.864323  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.864334  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:19.864343  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:19.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:19.899630  451238 cri.go:89] found id: ""
	I0805 13:02:19.899661  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.899673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:19.899682  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:19.899786  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:19.935798  451238 cri.go:89] found id: ""
	I0805 13:02:19.935826  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.935836  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:19.935843  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:19.935896  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:19.977984  451238 cri.go:89] found id: ""
	I0805 13:02:19.978019  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.978031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:19.978044  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:19.978062  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:20.030096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:20.030131  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:20.043878  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:20.043940  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:20.119251  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:20.119279  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:20.119297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:20.202445  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:20.202488  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:18.026282  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:20.524225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:19.408760  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.410108  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.445045  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.944150  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:22.744728  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:22.758606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:22.758675  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:22.791663  451238 cri.go:89] found id: ""
	I0805 13:02:22.791696  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.791708  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:22.791717  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:22.791821  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:22.826568  451238 cri.go:89] found id: ""
	I0805 13:02:22.826594  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.826603  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:22.826609  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:22.826671  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:22.860430  451238 cri.go:89] found id: ""
	I0805 13:02:22.860459  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.860470  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:22.860479  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:22.860543  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:22.893815  451238 cri.go:89] found id: ""
	I0805 13:02:22.893846  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.893854  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:22.893860  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:22.893929  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:22.929804  451238 cri.go:89] found id: ""
	I0805 13:02:22.929830  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.929840  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:22.929849  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:22.929915  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:22.964918  451238 cri.go:89] found id: ""
	I0805 13:02:22.964950  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.964961  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:22.964969  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:22.965035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:23.000236  451238 cri.go:89] found id: ""
	I0805 13:02:23.000271  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.000282  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:23.000290  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:23.000354  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:23.052075  451238 cri.go:89] found id: ""
	I0805 13:02:23.052108  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.052117  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:23.052128  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:23.052141  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:23.104213  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:23.104248  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:23.118811  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:23.118851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:23.188552  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:23.188578  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:23.188595  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:23.272518  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:23.272562  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:25.811116  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:25.825030  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:25.825113  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:25.864282  451238 cri.go:89] found id: ""
	I0805 13:02:25.864318  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.864331  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:25.864339  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:25.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:25.901712  451238 cri.go:89] found id: ""
	I0805 13:02:25.901746  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.901754  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:25.901760  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:25.901822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:25.937036  451238 cri.go:89] found id: ""
	I0805 13:02:25.937068  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.937077  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:25.937083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:25.937146  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:25.974598  451238 cri.go:89] found id: ""
	I0805 13:02:25.974627  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.974638  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:25.974646  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:25.974713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:26.011083  451238 cri.go:89] found id: ""
	I0805 13:02:26.011116  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.011124  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:26.011130  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:26.011190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:26.050187  451238 cri.go:89] found id: ""
	I0805 13:02:26.050219  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.050231  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:26.050242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:26.050317  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:26.085038  451238 cri.go:89] found id: ""
	I0805 13:02:26.085067  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.085077  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:26.085086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:26.085151  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:26.122121  451238 cri.go:89] found id: ""
	I0805 13:02:26.122150  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.122158  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:26.122173  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:26.122191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:26.193819  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:26.193850  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:26.193865  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:26.273453  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:26.273492  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:26.312474  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:26.312509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:26.363176  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:26.363215  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:22.524303  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:24.525047  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.528347  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.909120  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.409913  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:25.944824  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.444803  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.878523  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:28.892242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:28.892330  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:28.928650  451238 cri.go:89] found id: ""
	I0805 13:02:28.928682  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.928693  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:28.928702  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:28.928772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:28.965582  451238 cri.go:89] found id: ""
	I0805 13:02:28.965615  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.965626  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:28.965634  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:28.965698  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:29.001824  451238 cri.go:89] found id: ""
	I0805 13:02:29.001855  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.001865  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:29.001874  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:29.001939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:29.037688  451238 cri.go:89] found id: ""
	I0805 13:02:29.037715  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.037722  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:29.037730  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:29.037780  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:29.078495  451238 cri.go:89] found id: ""
	I0805 13:02:29.078540  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.078552  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:29.078559  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:29.078627  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:29.113728  451238 cri.go:89] found id: ""
	I0805 13:02:29.113764  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.113776  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:29.113786  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:29.113851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:29.147590  451238 cri.go:89] found id: ""
	I0805 13:02:29.147618  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.147629  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:29.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:29.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:29.186015  451238 cri.go:89] found id: ""
	I0805 13:02:29.186043  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.186052  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:29.186062  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:29.186074  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:29.242795  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:29.242850  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:29.257012  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:29.257046  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:29.330528  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:29.330555  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:29.330569  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:29.418109  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:29.418145  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:29.025256  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.526187  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.909283  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.409736  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:30.944380  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:32.945421  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.986351  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:32.001265  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:32.001349  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:32.035152  451238 cri.go:89] found id: ""
	I0805 13:02:32.035191  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.035200  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:32.035208  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:32.035262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:32.069086  451238 cri.go:89] found id: ""
	I0805 13:02:32.069118  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.069128  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:32.069136  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:32.069204  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:32.103788  451238 cri.go:89] found id: ""
	I0805 13:02:32.103814  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.103822  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:32.103831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:32.103893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:32.139104  451238 cri.go:89] found id: ""
	I0805 13:02:32.139138  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.139149  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:32.139157  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:32.139222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:32.192759  451238 cri.go:89] found id: ""
	I0805 13:02:32.192789  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.192798  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:32.192804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:32.192865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:32.231080  451238 cri.go:89] found id: ""
	I0805 13:02:32.231115  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.231126  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:32.231135  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:32.231200  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:32.266547  451238 cri.go:89] found id: ""
	I0805 13:02:32.266578  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.266587  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:32.266594  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:32.266647  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:32.301828  451238 cri.go:89] found id: ""
	I0805 13:02:32.301856  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.301865  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:32.301875  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:32.301888  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:32.358439  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:32.358479  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:32.372349  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:32.372383  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:32.442335  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:32.442369  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:32.442388  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:32.521705  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:32.521744  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:35.060867  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:35.074370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:35.074433  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:35.111149  451238 cri.go:89] found id: ""
	I0805 13:02:35.111181  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.111191  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:35.111200  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:35.111268  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:35.153781  451238 cri.go:89] found id: ""
	I0805 13:02:35.153814  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.153825  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:35.153832  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:35.153894  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:35.193207  451238 cri.go:89] found id: ""
	I0805 13:02:35.193239  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.193256  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:35.193291  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:35.193370  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:35.243879  451238 cri.go:89] found id: ""
	I0805 13:02:35.243915  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.243928  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:35.243936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:35.243994  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:35.297922  451238 cri.go:89] found id: ""
	I0805 13:02:35.297954  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.297966  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:35.297973  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:35.298039  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:35.333201  451238 cri.go:89] found id: ""
	I0805 13:02:35.333234  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.333245  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:35.333254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:35.333316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:35.366327  451238 cri.go:89] found id: ""
	I0805 13:02:35.366361  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.366373  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:35.366381  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:35.366449  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:35.401515  451238 cri.go:89] found id: ""
	I0805 13:02:35.401546  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.401555  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:35.401565  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:35.401578  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:35.451057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:35.451090  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:35.465054  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:35.465095  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:35.547111  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:35.547142  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:35.547160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:35.627451  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:35.627490  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:34.025104  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:36.524904  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:33.908489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.909183  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.909360  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.445317  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.446056  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:38.169022  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:38.181892  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:38.181968  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:38.217919  451238 cri.go:89] found id: ""
	I0805 13:02:38.217951  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.217961  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:38.217970  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:38.218041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:38.253967  451238 cri.go:89] found id: ""
	I0805 13:02:38.253999  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.254008  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:38.254020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:38.254073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:38.293757  451238 cri.go:89] found id: ""
	I0805 13:02:38.293789  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.293801  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:38.293809  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:38.293904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:38.329657  451238 cri.go:89] found id: ""
	I0805 13:02:38.329686  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.329697  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:38.329705  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:38.329772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:38.364602  451238 cri.go:89] found id: ""
	I0805 13:02:38.364635  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.364647  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:38.364656  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:38.364732  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:38.396352  451238 cri.go:89] found id: ""
	I0805 13:02:38.396382  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.396394  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:38.396403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:38.396471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:38.429172  451238 cri.go:89] found id: ""
	I0805 13:02:38.429203  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.429214  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:38.429223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:38.429293  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:38.464855  451238 cri.go:89] found id: ""
	I0805 13:02:38.464891  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.464903  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:38.464916  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:38.464931  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:38.514924  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:38.514967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:38.530076  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:38.530113  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:38.602472  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:38.602494  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:38.602509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:38.683905  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:38.683948  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:41.226878  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:41.245027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:41.245100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:41.280482  451238 cri.go:89] found id: ""
	I0805 13:02:41.280511  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.280523  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:41.280532  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:41.280597  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:41.316592  451238 cri.go:89] found id: ""
	I0805 13:02:41.316622  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.316633  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:41.316641  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:41.316708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:41.353282  451238 cri.go:89] found id: ""
	I0805 13:02:41.353313  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.353324  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:41.353333  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:41.353397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:41.393379  451238 cri.go:89] found id: ""
	I0805 13:02:41.393406  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.393417  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:41.393426  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:41.393502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:41.430980  451238 cri.go:89] found id: ""
	I0805 13:02:41.431012  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.431023  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:41.431031  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:41.431106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:41.467228  451238 cri.go:89] found id: ""
	I0805 13:02:41.467261  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.467273  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:41.467281  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:41.467348  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:41.502105  451238 cri.go:89] found id: ""
	I0805 13:02:41.502153  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.502166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:41.502175  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:41.502250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:41.539286  451238 cri.go:89] found id: ""
	I0805 13:02:41.539314  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.539325  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:41.539338  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:41.539353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:41.592135  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:41.592175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:41.608151  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:41.608184  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:41.680096  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:41.680131  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:41.680148  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:41.759589  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:41.759628  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:39.025448  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:41.526590  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:40.409447  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.909412  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:39.945459  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.444630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.300461  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:44.314310  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:44.314388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:44.348516  451238 cri.go:89] found id: ""
	I0805 13:02:44.348549  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.348562  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:44.348570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:44.348635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:44.388256  451238 cri.go:89] found id: ""
	I0805 13:02:44.388289  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.388299  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:44.388309  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:44.388383  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:44.426743  451238 cri.go:89] found id: ""
	I0805 13:02:44.426778  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.426786  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:44.426792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:44.426848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:44.463008  451238 cri.go:89] found id: ""
	I0805 13:02:44.463044  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.463054  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:44.463062  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:44.463129  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:44.497662  451238 cri.go:89] found id: ""
	I0805 13:02:44.497696  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.497707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:44.497715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:44.497789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:44.534253  451238 cri.go:89] found id: ""
	I0805 13:02:44.534281  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.534288  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:44.534294  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:44.534378  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:44.574350  451238 cri.go:89] found id: ""
	I0805 13:02:44.574380  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.574390  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:44.574398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:44.574468  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:44.609984  451238 cri.go:89] found id: ""
	I0805 13:02:44.610018  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.610031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:44.610044  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:44.610060  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:44.650363  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:44.650402  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:44.700997  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:44.701032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:44.716841  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:44.716874  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:44.785482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:44.785502  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:44.785517  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:44.023932  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.025733  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.909613  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.409724  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.445234  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.944157  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:48.946098  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.365382  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:47.378779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:47.378851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:47.413615  451238 cri.go:89] found id: ""
	I0805 13:02:47.413636  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.413645  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:47.413651  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:47.413699  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:47.448536  451238 cri.go:89] found id: ""
	I0805 13:02:47.448563  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.448572  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:47.448578  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:47.448629  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:47.490817  451238 cri.go:89] found id: ""
	I0805 13:02:47.490847  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.490856  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:47.490862  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:47.490931  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:47.533151  451238 cri.go:89] found id: ""
	I0805 13:02:47.533179  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.533187  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:47.533193  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:47.533250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:47.571991  451238 cri.go:89] found id: ""
	I0805 13:02:47.572022  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.572030  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:47.572036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:47.572096  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:47.606943  451238 cri.go:89] found id: ""
	I0805 13:02:47.606976  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.606987  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:47.606995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:47.607073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:47.644704  451238 cri.go:89] found id: ""
	I0805 13:02:47.644741  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.644753  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:47.644762  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:47.644828  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:47.687361  451238 cri.go:89] found id: ""
	I0805 13:02:47.687395  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.687408  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:47.687427  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:47.687453  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.766572  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:47.766614  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:47.812209  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:47.812242  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:47.862948  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:47.862987  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:47.878697  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:47.878729  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:47.951680  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.452861  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:50.466370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:50.466440  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:50.500001  451238 cri.go:89] found id: ""
	I0805 13:02:50.500031  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.500043  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:50.500051  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:50.500126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:50.541752  451238 cri.go:89] found id: ""
	I0805 13:02:50.541786  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.541794  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:50.541800  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:50.541864  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:50.578889  451238 cri.go:89] found id: ""
	I0805 13:02:50.578915  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.578923  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:50.578930  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:50.578984  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:50.614865  451238 cri.go:89] found id: ""
	I0805 13:02:50.614896  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.614906  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:50.614912  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:50.614980  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:50.656169  451238 cri.go:89] found id: ""
	I0805 13:02:50.656195  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.656202  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:50.656209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:50.656277  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:50.695050  451238 cri.go:89] found id: ""
	I0805 13:02:50.695082  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.695099  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:50.695108  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:50.695187  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:50.733205  451238 cri.go:89] found id: ""
	I0805 13:02:50.733233  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.733242  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:50.733249  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:50.733300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:50.770654  451238 cri.go:89] found id: ""
	I0805 13:02:50.770683  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.770693  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:50.770706  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:50.770721  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:50.826521  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:50.826567  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:50.842153  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:50.842181  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:50.916445  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.916474  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:50.916487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:50.999973  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:51.000020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:48.525240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.024459  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:49.907505  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.909037  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:50.946199  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.444128  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.539541  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:53.553804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:53.553893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:53.593075  451238 cri.go:89] found id: ""
	I0805 13:02:53.593105  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.593114  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:53.593121  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:53.593190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:53.629967  451238 cri.go:89] found id: ""
	I0805 13:02:53.630001  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.630012  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:53.630020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:53.630088  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:53.663535  451238 cri.go:89] found id: ""
	I0805 13:02:53.663564  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.663572  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:53.663577  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:53.663635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:53.697650  451238 cri.go:89] found id: ""
	I0805 13:02:53.697676  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.697684  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:53.697690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:53.697741  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:53.732845  451238 cri.go:89] found id: ""
	I0805 13:02:53.732873  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.732883  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:53.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:53.732950  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:53.774673  451238 cri.go:89] found id: ""
	I0805 13:02:53.774703  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.774712  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:53.774719  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:53.774783  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:53.815368  451238 cri.go:89] found id: ""
	I0805 13:02:53.815401  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.815413  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:53.815423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:53.815487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:53.849726  451238 cri.go:89] found id: ""
	I0805 13:02:53.849760  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.849771  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:53.849785  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:53.849801  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:53.925356  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:53.925398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.966721  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:53.966751  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:54.023096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:54.023140  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:54.037634  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:54.037666  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:54.115159  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:56.616326  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:56.629665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:56.629744  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:56.665665  451238 cri.go:89] found id: ""
	I0805 13:02:56.665701  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.665713  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:56.665722  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:56.665790  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:56.700446  451238 cri.go:89] found id: ""
	I0805 13:02:56.700473  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.700481  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:56.700488  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:56.700554  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:56.737152  451238 cri.go:89] found id: ""
	I0805 13:02:56.737190  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.737202  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:56.737210  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:56.737283  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:56.777909  451238 cri.go:89] found id: ""
	I0805 13:02:56.777942  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.777954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:56.777961  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:56.778027  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:56.813503  451238 cri.go:89] found id: ""
	I0805 13:02:56.813537  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.813547  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:56.813556  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:56.813625  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:56.848964  451238 cri.go:89] found id: ""
	I0805 13:02:56.848993  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.849002  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:56.849008  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:56.849071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:56.884310  451238 cri.go:89] found id: ""
	I0805 13:02:56.884339  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.884347  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:56.884356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:56.884417  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:56.925895  451238 cri.go:89] found id: ""
	I0805 13:02:56.925926  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.925936  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:56.925948  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:56.925962  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:53.025086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.025424  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.026117  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.909851  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.411536  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.945123  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.945278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.982847  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:56.982882  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:56.997703  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:56.997742  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:57.071130  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:57.071153  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:57.071174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:57.152985  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:57.153029  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.697501  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:59.711799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:59.711879  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:59.746992  451238 cri.go:89] found id: ""
	I0805 13:02:59.747024  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.747035  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:59.747043  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:59.747115  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:59.780563  451238 cri.go:89] found id: ""
	I0805 13:02:59.780592  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.780604  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:59.780611  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:59.780676  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:59.816973  451238 cri.go:89] found id: ""
	I0805 13:02:59.817007  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.817019  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:59.817027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:59.817098  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:59.851989  451238 cri.go:89] found id: ""
	I0805 13:02:59.852018  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.852028  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:59.852035  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:59.852086  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:59.887491  451238 cri.go:89] found id: ""
	I0805 13:02:59.887517  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.887525  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:59.887535  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:59.887587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:59.924965  451238 cri.go:89] found id: ""
	I0805 13:02:59.924997  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.925005  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:59.925012  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:59.925062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:59.965830  451238 cri.go:89] found id: ""
	I0805 13:02:59.965860  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.965868  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:59.965875  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:59.965932  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:00.003208  451238 cri.go:89] found id: ""
	I0805 13:03:00.003241  451238 logs.go:276] 0 containers: []
	W0805 13:03:00.003250  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:00.003260  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:00.003275  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:00.056865  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:00.056911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:00.070563  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:00.070593  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:00.137931  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:00.137957  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:00.137976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:00.221598  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:00.221649  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.525042  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.024461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:58.903499  450576 pod_ready.go:81] duration metric: took 4m0.001018928s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	E0805 13:02:58.903533  450576 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:02:58.903556  450576 pod_ready.go:38] duration metric: took 4m8.049032492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:02:58.903598  450576 kubeadm.go:597] duration metric: took 4m18.518107211s to restartPrimaryControlPlane
	W0805 13:02:58.903786  450576 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:02:58.903819  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:02:59.945464  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.443954  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.761328  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:02.775836  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:02.775904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:02.812714  451238 cri.go:89] found id: ""
	I0805 13:03:02.812752  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.812764  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:02.812773  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:02.812848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:02.850072  451238 cri.go:89] found id: ""
	I0805 13:03:02.850103  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.850130  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:02.850138  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:02.850197  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:02.886956  451238 cri.go:89] found id: ""
	I0805 13:03:02.887081  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.887103  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:02.887114  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:02.887188  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:02.924874  451238 cri.go:89] found id: ""
	I0805 13:03:02.924906  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.924918  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:02.924925  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:02.924996  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:02.965965  451238 cri.go:89] found id: ""
	I0805 13:03:02.965996  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.966007  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:02.966015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:02.966101  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:03.001081  451238 cri.go:89] found id: ""
	I0805 13:03:03.001118  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.001130  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:03.001140  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:03.001201  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:03.036194  451238 cri.go:89] found id: ""
	I0805 13:03:03.036223  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.036234  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:03.036243  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:03.036303  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:03.071905  451238 cri.go:89] found id: ""
	I0805 13:03:03.071940  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.071951  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:03.071964  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:03.071982  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:03.124400  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:03.124442  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:03.138492  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:03.138520  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:03.207300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:03.207326  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:03.207342  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:03.294941  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:03.294983  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:05.836187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:05.850504  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:05.850609  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:05.889692  451238 cri.go:89] found id: ""
	I0805 13:03:05.889718  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.889729  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:05.889737  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:05.889804  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:05.924597  451238 cri.go:89] found id: ""
	I0805 13:03:05.924630  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.924640  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:05.924647  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:05.924711  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:05.960373  451238 cri.go:89] found id: ""
	I0805 13:03:05.960404  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.960413  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:05.960419  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:05.960471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:05.996583  451238 cri.go:89] found id: ""
	I0805 13:03:05.996617  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.996628  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:05.996636  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:05.996708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:06.033539  451238 cri.go:89] found id: ""
	I0805 13:03:06.033567  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.033575  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:06.033586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:06.033655  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:06.069348  451238 cri.go:89] found id: ""
	I0805 13:03:06.069378  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.069391  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:06.069401  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:06.069466  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:06.103570  451238 cri.go:89] found id: ""
	I0805 13:03:06.103599  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.103607  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:06.103613  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:06.103665  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:06.140230  451238 cri.go:89] found id: ""
	I0805 13:03:06.140260  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.140271  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:06.140284  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:06.140300  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:06.191073  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:06.191123  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:06.204825  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:06.204857  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:06.281309  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:06.281339  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:06.281358  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:06.361709  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:06.361749  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:04.025007  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.524506  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:04.444267  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.444910  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.445441  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.903194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:08.921602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:08.921681  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:08.960916  451238 cri.go:89] found id: ""
	I0805 13:03:08.960945  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.960975  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:08.960986  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:08.961055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:08.996316  451238 cri.go:89] found id: ""
	I0805 13:03:08.996417  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.996436  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:08.996448  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:08.996522  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:09.038536  451238 cri.go:89] found id: ""
	I0805 13:03:09.038572  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.038584  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:09.038593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:09.038664  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:09.075368  451238 cri.go:89] found id: ""
	I0805 13:03:09.075396  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.075405  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:09.075412  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:09.075474  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:09.114232  451238 cri.go:89] found id: ""
	I0805 13:03:09.114262  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.114272  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:09.114280  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:09.114353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:09.161878  451238 cri.go:89] found id: ""
	I0805 13:03:09.161964  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.161978  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:09.161988  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:09.162062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:09.206694  451238 cri.go:89] found id: ""
	I0805 13:03:09.206727  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.206739  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:09.206748  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:09.206890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:09.257029  451238 cri.go:89] found id: ""
	I0805 13:03:09.257066  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.257079  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:09.257090  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:09.257107  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:09.278638  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:09.278679  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:09.353760  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:09.353781  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:09.353793  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:09.438371  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:09.438419  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:09.487253  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:09.487297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:08.018954  450884 pod_ready.go:81] duration metric: took 4m0.00055059s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:08.018987  450884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:03:08.019010  450884 pod_ready.go:38] duration metric: took 4m11.028507743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:08.019048  450884 kubeadm.go:597] duration metric: took 4m19.097834327s to restartPrimaryControlPlane
	W0805 13:03:08.019122  450884 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:08.019157  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:10.945002  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.945953  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.042215  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:12.055721  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:12.055812  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:12.096936  451238 cri.go:89] found id: ""
	I0805 13:03:12.096965  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.096977  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:12.096985  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:12.097051  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:12.136149  451238 cri.go:89] found id: ""
	I0805 13:03:12.136181  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.136192  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:12.136199  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:12.136276  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:12.180568  451238 cri.go:89] found id: ""
	I0805 13:03:12.180606  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.180618  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:12.180626  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:12.180695  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:12.221759  451238 cri.go:89] found id: ""
	I0805 13:03:12.221794  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.221806  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:12.221815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:12.221882  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:12.259460  451238 cri.go:89] found id: ""
	I0805 13:03:12.259490  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.259498  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:12.259508  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:12.259563  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:12.301245  451238 cri.go:89] found id: ""
	I0805 13:03:12.301277  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.301289  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:12.301297  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:12.301368  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:12.343640  451238 cri.go:89] found id: ""
	I0805 13:03:12.343678  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.343690  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:12.343698  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:12.343809  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:12.382729  451238 cri.go:89] found id: ""
	I0805 13:03:12.382762  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.382774  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:12.382787  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:12.382807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:12.400862  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:12.400897  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:12.478755  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:12.478788  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:12.478807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:12.566029  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:12.566080  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:12.611834  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:12.611929  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:15.171517  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:15.185569  451238 kubeadm.go:597] duration metric: took 4m3.737627997s to restartPrimaryControlPlane
	W0805 13:03:15.185662  451238 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:15.185697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:15.669994  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:15.684794  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:15.695088  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:15.705403  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:15.705427  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:15.705488  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:15.714777  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:15.714833  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:15.724437  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:15.733263  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:15.733317  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:15.743004  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.752219  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:15.752278  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.761788  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:15.771193  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:15.771245  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:15.780964  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:15.855628  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:03:15.855751  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:16.015686  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:16.015880  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:16.016041  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:16.207054  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:16.209133  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:16.209256  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:16.209376  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:16.209493  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:16.209597  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:16.209703  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:16.211637  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:16.211726  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:16.211833  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:16.211959  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:16.212690  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:16.212863  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:16.212963  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:16.283080  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:16.609523  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:16.765635  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:16.934487  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:16.955335  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:16.956267  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:16.956328  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:17.088081  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:15.445305  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.447306  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.090118  451238 out.go:204]   - Booting up control plane ...
	I0805 13:03:17.090264  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:17.100902  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:17.101263  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:17.102210  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:17.112522  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:03:19.943658  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:21.944253  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:23.945158  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:25.252381  450576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.348530672s)
	I0805 13:03:25.252504  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:25.269305  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:25.279322  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:25.289241  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:25.289266  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:25.289304  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:25.298671  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:25.298732  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:25.309962  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:25.320180  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:25.320247  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:25.330481  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.340565  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:25.340652  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.351244  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:25.361443  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:25.361536  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:25.371655  450576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:25.419277  450576 kubeadm.go:310] W0805 13:03:25.398597    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.420220  450576 kubeadm.go:310] W0805 13:03:25.399642    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.537148  450576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:25.945501  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:27.945972  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.413703  450576 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0805 13:03:33.413775  450576 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:33.413863  450576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:33.414008  450576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:33.414152  450576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:33.414235  450576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:33.415804  450576 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:33.415874  450576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:33.415949  450576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:33.416037  450576 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:33.416101  450576 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:33.416174  450576 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:33.416237  450576 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:33.416289  450576 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:33.416357  450576 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:33.416437  450576 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:33.416518  450576 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:33.416553  450576 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:33.416603  450576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:33.416646  450576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:33.416701  450576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:33.416745  450576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:33.416816  450576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:33.416878  450576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:33.416971  450576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:33.417059  450576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:33.418572  450576 out.go:204]   - Booting up control plane ...
	I0805 13:03:33.418671  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:33.418751  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:33.418833  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:33.418965  450576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:33.419092  450576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:33.419172  450576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:33.419342  450576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:33.419488  450576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0805 13:03:33.419577  450576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.308417ms
	I0805 13:03:33.419672  450576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:33.419780  450576 kubeadm.go:310] [api-check] The API server is healthy after 5.001429681s
	I0805 13:03:33.419908  450576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:33.420049  450576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:33.420117  450576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:33.420293  450576 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-669469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:33.420385  450576 kubeadm.go:310] [bootstrap-token] Using token: i9zl3x.c4hzh1c9ccxlydzt
	I0805 13:03:33.421925  450576 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:33.422042  450576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:33.422157  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:33.422352  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:33.422488  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:33.422649  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:33.422784  450576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:33.422914  450576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:33.422991  450576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:33.423060  450576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:33.423070  450576 kubeadm.go:310] 
	I0805 13:03:33.423160  450576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:33.423173  450576 kubeadm.go:310] 
	I0805 13:03:33.423274  450576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:33.423283  450576 kubeadm.go:310] 
	I0805 13:03:33.423316  450576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:33.423409  450576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:33.423495  450576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:33.423513  450576 kubeadm.go:310] 
	I0805 13:03:33.423616  450576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:33.423628  450576 kubeadm.go:310] 
	I0805 13:03:33.423692  450576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:33.423701  450576 kubeadm.go:310] 
	I0805 13:03:33.423793  450576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:33.423931  450576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:33.424030  450576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:33.424039  450576 kubeadm.go:310] 
	I0805 13:03:33.424106  450576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:33.424176  450576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:33.424185  450576 kubeadm.go:310] 
	I0805 13:03:33.424282  450576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424430  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:33.424473  450576 kubeadm.go:310] 	--control-plane 
	I0805 13:03:33.424482  450576 kubeadm.go:310] 
	I0805 13:03:33.424588  450576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:33.424602  450576 kubeadm.go:310] 
	I0805 13:03:33.424725  450576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424870  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:33.424892  450576 cni.go:84] Creating CNI manager for ""
	I0805 13:03:33.424911  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:33.426503  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:33.427981  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:33.439484  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:33.458459  450576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:33.458547  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:33.458579  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-669469 minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=no-preload-669469 minikube.k8s.io/primary=true
	I0805 13:03:33.488847  450576 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:29.946423  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:32.444923  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.674306  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.174940  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.674936  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.174693  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.675004  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.174801  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.674878  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.174394  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.263948  450576 kubeadm.go:1113] duration metric: took 3.805464287s to wait for elevateKubeSystemPrivileges
	I0805 13:03:37.263985  450576 kubeadm.go:394] duration metric: took 4m56.93214495s to StartCluster
	I0805 13:03:37.264025  450576 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.264143  450576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:03:37.265965  450576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.266283  450576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:03:37.266400  450576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:03:37.266469  450576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-669469"
	I0805 13:03:37.266510  450576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-669469"
	W0805 13:03:37.266518  450576 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:03:37.266519  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:03:37.266551  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.266505  450576 addons.go:69] Setting default-storageclass=true in profile "no-preload-669469"
	I0805 13:03:37.266547  450576 addons.go:69] Setting metrics-server=true in profile "no-preload-669469"
	I0805 13:03:37.266612  450576 addons.go:234] Setting addon metrics-server=true in "no-preload-669469"
	I0805 13:03:37.266616  450576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-669469"
	W0805 13:03:37.266627  450576 addons.go:243] addon metrics-server should already be in state true
	I0805 13:03:37.266668  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267035  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267041  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267085  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267985  450576 out.go:177] * Verifying Kubernetes components...
	I0805 13:03:37.269486  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:03:37.283242  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0805 13:03:37.283291  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0805 13:03:37.283245  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0805 13:03:37.283710  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283785  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283717  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284316  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284319  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284335  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284360  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284734  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284735  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284746  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284963  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.285343  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285375  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.285387  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285441  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.288699  450576 addons.go:234] Setting addon default-storageclass=true in "no-preload-669469"
	W0805 13:03:37.288722  450576 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:03:37.288753  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.289023  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.289049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.303814  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0805 13:03:37.304491  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.305081  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.305104  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.305552  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0805 13:03:37.305566  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0805 13:03:37.305583  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.305928  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306007  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306148  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.306190  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.306485  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306503  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306595  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306611  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306971  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.306998  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.307157  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.307162  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.309002  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.309241  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.311054  450576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:03:37.311055  450576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:03:37.312682  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:03:37.312695  450576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:03:37.312710  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.312834  450576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.312856  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:03:37.312874  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.317044  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317635  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.317660  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317753  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317955  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318141  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.318360  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.318400  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.318427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.318539  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.318633  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318967  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.319111  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.319241  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.325066  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0805 13:03:37.325633  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.326052  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.326071  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.326326  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.326473  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.328502  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.328814  450576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.328826  450576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:03:37.328839  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.331482  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.331853  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.331874  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.332013  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.332169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.332270  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.332358  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.483477  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:03:37.501924  450576 node_ready.go:35] waiting up to 6m0s for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511394  450576 node_ready.go:49] node "no-preload-669469" has status "Ready":"True"
	I0805 13:03:37.511427  450576 node_ready.go:38] duration metric: took 9.462968ms for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511443  450576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:37.526505  450576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:37.575598  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.583338  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:03:37.583362  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:03:37.594019  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.629885  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:03:37.629913  450576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:03:37.684790  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.684825  450576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:03:37.753629  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.857352  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857386  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.857777  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.857780  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.857812  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.857829  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857838  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.858101  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.858117  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.858153  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.871616  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.871639  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.871970  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.872022  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.872031  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290429  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290449  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290784  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.290856  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290871  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290880  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290829  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.291265  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.291289  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.291271  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.880274  450576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.126602375s)
	I0805 13:03:38.880331  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880344  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880868  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.880896  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.880906  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880916  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880871  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881196  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.881204  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881211  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.881230  450576 addons.go:475] Verifying addon metrics-server=true in "no-preload-669469"
	I0805 13:03:38.882896  450576 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:03:34.945631  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:37.446855  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:39.741362  450884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.722174979s)
	I0805 13:03:39.741438  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:39.760465  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:39.770587  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:39.780157  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:39.780177  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:39.780215  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 13:03:39.790172  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:39.790243  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:39.803838  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 13:03:39.816314  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:39.816367  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:39.826636  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.836513  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:39.836570  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.846356  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 13:03:39.855694  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:39.855770  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:39.865721  450884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:40.081251  450884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:38.884521  450576 addons.go:510] duration metric: took 1.618121451s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:03:39.536758  450576 pod_ready.go:102] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:41.035239  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.035266  450576 pod_ready.go:81] duration metric: took 3.508734543s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.035280  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042787  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.042811  450576 pod_ready.go:81] duration metric: took 7.522909ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042824  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048338  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:42.048363  450576 pod_ready.go:81] duration metric: took 1.005531569s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048373  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:39.945815  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:42.445704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:44.056394  450576 pod_ready.go:102] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:45.555280  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:45.555310  450576 pod_ready.go:81] duration metric: took 3.506927542s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:45.555321  450576 pod_ready.go:38] duration metric: took 8.043865797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:45.555338  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:45.555397  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:45.572225  450576 api_server.go:72] duration metric: took 8.30589728s to wait for apiserver process to appear ...
	I0805 13:03:45.572249  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:45.572272  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 13:03:45.578042  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 13:03:45.579014  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 13:03:45.579034  450576 api_server.go:131] duration metric: took 6.778214ms to wait for apiserver health ...
	I0805 13:03:45.579042  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:45.585537  450576 system_pods.go:59] 9 kube-system pods found
	I0805 13:03:45.585660  450576 system_pods.go:61] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.585673  450576 system_pods.go:61] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.585679  450576 system_pods.go:61] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.585687  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.585694  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.585700  450576 system_pods.go:61] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.585705  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.585716  450576 system_pods.go:61] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.585726  450576 system_pods.go:61] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.585736  450576 system_pods.go:74] duration metric: took 6.688464ms to wait for pod list to return data ...
	I0805 13:03:45.585749  450576 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:03:45.589498  450576 default_sa.go:45] found service account: "default"
	I0805 13:03:45.589526  450576 default_sa.go:55] duration metric: took 3.765664ms for default service account to be created ...
	I0805 13:03:45.589535  450576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:03:45.597499  450576 system_pods.go:86] 9 kube-system pods found
	I0805 13:03:45.597527  450576 system_pods.go:89] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.597533  450576 system_pods.go:89] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.597537  450576 system_pods.go:89] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.597541  450576 system_pods.go:89] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.597547  450576 system_pods.go:89] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.597550  450576 system_pods.go:89] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.597554  450576 system_pods.go:89] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.597563  450576 system_pods.go:89] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.597568  450576 system_pods.go:89] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.597577  450576 system_pods.go:126] duration metric: took 8.035546ms to wait for k8s-apps to be running ...
	I0805 13:03:45.597586  450576 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:03:45.597631  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:45.619317  450576 system_svc.go:56] duration metric: took 21.706117ms WaitForService to wait for kubelet
	I0805 13:03:45.619365  450576 kubeadm.go:582] duration metric: took 8.353035332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:03:45.619398  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:03:45.622763  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:03:45.622790  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 13:03:45.622801  450576 node_conditions.go:105] duration metric: took 3.396756ms to run NodePressure ...
	I0805 13:03:45.622814  450576 start.go:241] waiting for startup goroutines ...
	I0805 13:03:45.622821  450576 start.go:246] waiting for cluster config update ...
	I0805 13:03:45.622831  450576 start.go:255] writing updated cluster config ...
	I0805 13:03:45.623102  450576 ssh_runner.go:195] Run: rm -f paused
	I0805 13:03:45.682547  450576 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0805 13:03:45.684415  450576 out.go:177] * Done! kubectl is now configured to use "no-preload-669469" cluster and "default" namespace by default
	I0805 13:03:48.707730  450884 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 13:03:48.707817  450884 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:48.707920  450884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:48.708065  450884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:48.708218  450884 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:48.708311  450884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:48.709807  450884 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:48.709878  450884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:48.709931  450884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:48.710008  450884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:48.710084  450884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:48.710148  450884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:48.710196  450884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:48.710251  450884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:48.710316  450884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:48.710415  450884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:48.710520  450884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:48.710582  450884 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:48.710656  450884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:48.710700  450884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:48.710746  450884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:48.710790  450884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:48.710843  450884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:48.710895  450884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:48.710971  450884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:48.711055  450884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:48.713503  450884 out.go:204]   - Booting up control plane ...
	I0805 13:03:48.713601  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:48.713687  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:48.713763  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:48.713911  450884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:48.714039  450884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:48.714105  450884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:48.714222  450884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:48.714284  450884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 13:03:48.714345  450884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.128103ms
	I0805 13:03:48.714423  450884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:48.714491  450884 kubeadm.go:310] [api-check] The API server is healthy after 5.502076793s
	I0805 13:03:48.714600  450884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:48.714730  450884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:48.714794  450884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:48.714987  450884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-371585 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:48.715075  450884 kubeadm.go:310] [bootstrap-token] Using token: cpuyhq.sjq5yhx27tk7meks
	I0805 13:03:48.716575  450884 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:48.716686  450884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:48.716775  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:48.716952  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:48.717075  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:48.717196  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:48.717270  450884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:48.717391  450884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:48.717450  450884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:48.717512  450884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:48.717521  450884 kubeadm.go:310] 
	I0805 13:03:48.717613  450884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:48.717623  450884 kubeadm.go:310] 
	I0805 13:03:48.717724  450884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:48.717734  450884 kubeadm.go:310] 
	I0805 13:03:48.717768  450884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:48.717848  450884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:48.717892  450884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:48.717898  450884 kubeadm.go:310] 
	I0805 13:03:48.717968  450884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:48.717978  450884 kubeadm.go:310] 
	I0805 13:03:48.718047  450884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:48.718057  450884 kubeadm.go:310] 
	I0805 13:03:48.718133  450884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:48.718220  450884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:48.718297  450884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:48.718304  450884 kubeadm.go:310] 
	I0805 13:03:48.718422  450884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:48.718506  450884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:48.718513  450884 kubeadm.go:310] 
	I0805 13:03:48.718585  450884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718669  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:48.718688  450884 kubeadm.go:310] 	--control-plane 
	I0805 13:03:48.718694  450884 kubeadm.go:310] 
	I0805 13:03:48.718761  450884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:48.718769  450884 kubeadm.go:310] 
	I0805 13:03:48.718848  450884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718948  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:48.718957  450884 cni.go:84] Creating CNI manager for ""
	I0805 13:03:48.718965  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:48.720262  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:44.946225  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:47.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:48.721390  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:48.732324  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:48.750318  450884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:48.750397  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:48.750398  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-371585 minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=default-k8s-diff-port-371585 minikube.k8s.io/primary=true
	I0805 13:03:48.781590  450884 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:48.966544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.467473  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.967093  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.466813  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.967183  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.467350  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.967432  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.444667  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:49.444719  450393 pod_ready.go:81] duration metric: took 4m0.006667631s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:49.444731  450393 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0805 13:03:49.444738  450393 pod_ready.go:38] duration metric: took 4m2.407503205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:49.444757  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:49.444787  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:49.444849  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:49.502039  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:49.502067  450393 cri.go:89] found id: ""
	I0805 13:03:49.502079  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:49.502139  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.510426  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:49.510494  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:49.553861  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:49.553889  450393 cri.go:89] found id: ""
	I0805 13:03:49.553899  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:49.553960  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.558802  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:49.558868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:49.594787  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:49.594810  450393 cri.go:89] found id: ""
	I0805 13:03:49.594828  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:49.594891  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.599735  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:49.599822  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:49.637856  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:49.637878  450393 cri.go:89] found id: ""
	I0805 13:03:49.637886  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:49.637939  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.642228  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:49.642295  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:49.683822  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:49.683844  450393 cri.go:89] found id: ""
	I0805 13:03:49.683853  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:49.683913  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.688077  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:49.688155  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:49.724887  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:49.724913  450393 cri.go:89] found id: ""
	I0805 13:03:49.724923  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:49.724987  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.728965  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:49.729052  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:49.765826  450393 cri.go:89] found id: ""
	I0805 13:03:49.765859  450393 logs.go:276] 0 containers: []
	W0805 13:03:49.765871  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:49.765878  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:49.765944  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:49.803790  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:49.803811  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.803815  450393 cri.go:89] found id: ""
	I0805 13:03:49.803823  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:49.803887  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.808064  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.812308  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:49.812332  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.851842  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:49.851867  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:50.418758  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:50.418808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:50.564965  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:50.564999  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:50.608518  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:50.608557  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:50.658446  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:50.658482  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:50.699924  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:50.699962  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:50.741228  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:50.741264  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:50.776100  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:50.776133  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:50.827847  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:50.827880  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:50.867699  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:50.867731  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:50.920049  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:50.920085  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:50.934198  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:50.934224  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:53.477808  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:53.494062  450393 api_server.go:72] duration metric: took 4m14.183013645s to wait for apiserver process to appear ...
	I0805 13:03:53.494093  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:53.494143  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:53.494211  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:53.534293  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:53.534322  450393 cri.go:89] found id: ""
	I0805 13:03:53.534333  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:53.534400  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.539014  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:53.539088  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:53.576587  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:53.576608  450393 cri.go:89] found id: ""
	I0805 13:03:53.576616  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:53.576667  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.582068  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:53.582147  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:53.623240  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:53.623264  450393 cri.go:89] found id: ""
	I0805 13:03:53.623274  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:53.623352  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.627638  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:53.627699  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:53.668167  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:53.668198  450393 cri.go:89] found id: ""
	I0805 13:03:53.668209  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:53.668281  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.672390  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:53.672469  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:53.714046  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:53.714069  450393 cri.go:89] found id: ""
	I0805 13:03:53.714078  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:53.714130  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.718325  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:53.718392  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:53.756343  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:53.756372  450393 cri.go:89] found id: ""
	I0805 13:03:53.756382  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:53.756444  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.760627  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:53.760696  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:53.806370  450393 cri.go:89] found id: ""
	I0805 13:03:53.806406  450393 logs.go:276] 0 containers: []
	W0805 13:03:53.806424  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:53.806432  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:53.806505  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:53.843082  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:53.843116  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:53.843121  450393 cri.go:89] found id: ""
	I0805 13:03:53.843129  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:53.843188  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.847214  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.851093  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:53.851112  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:52.467589  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:52.967390  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.466580  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.967544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.467454  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.967281  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.467111  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.467255  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.296506  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:54.296556  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:54.343983  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:54.344026  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:54.389236  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:54.389271  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:54.427964  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:54.427996  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:54.465953  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:54.465988  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:54.521755  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:54.521835  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:54.565481  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:54.565513  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:54.606592  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:54.606634  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:54.650820  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:54.650858  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:54.704512  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:54.704559  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:54.722149  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:54.722184  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:54.844289  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:54.844324  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
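
The log-gathering pass above maps one-to-one onto shell commands that can be replayed by hand from a `minikube ssh` shell on the node; the sketch below uses the same commands quoted in the log, with a placeholder container ID (substitute an ID reported by `crictl ps -a`):

	# list containers for one component, e.g. kube-apiserver
	sudo crictl ps -a --quiet --name=kube-apiserver
	# tail the last 400 lines of one container's logs
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	# CRI-O and kubelet unit logs, plus recent kernel warnings
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
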
	I0805 13:03:57.386998  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 13:03:57.391714  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 13:03:57.392752  450393 api_server.go:141] control plane version: v1.30.3
	I0805 13:03:57.392776  450393 api_server.go:131] duration metric: took 3.898675075s to wait for apiserver health ...
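
The health probe above is an HTTPS GET against the apiserver's /healthz endpoint, which answers with the literal body "ok" once the control plane is serving. The test performs the request in Go; an approximately equivalent manual probe from the host (the curl invocation is an assumption, not what the test runs) is:

	# -k skips CA verification, acceptable for a quick liveness probe
	curl -k https://192.168.39.196:8443/healthz
	# expected response body: ok
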
	I0805 13:03:57.392783  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:57.392812  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:57.392868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:57.430171  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:57.430201  450393 cri.go:89] found id: ""
	I0805 13:03:57.430210  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:57.430270  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.434861  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:57.434920  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:57.490595  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:57.490622  450393 cri.go:89] found id: ""
	I0805 13:03:57.490632  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:57.490702  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.496054  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:57.496141  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:57.540248  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:57.540278  450393 cri.go:89] found id: ""
	I0805 13:03:57.540289  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:57.540353  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.547750  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:57.547820  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:57.595821  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.595852  450393 cri.go:89] found id: ""
	I0805 13:03:57.595864  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:57.595932  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.600153  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:57.600225  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:57.640382  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:57.640409  450393 cri.go:89] found id: ""
	I0805 13:03:57.640418  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:57.640486  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.645476  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:57.645569  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:57.700199  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:57.700224  450393 cri.go:89] found id: ""
	I0805 13:03:57.700233  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:57.700294  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.704818  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:57.704874  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:57.745647  450393 cri.go:89] found id: ""
	I0805 13:03:57.745677  450393 logs.go:276] 0 containers: []
	W0805 13:03:57.745687  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:57.745696  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:57.745760  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:57.787327  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:57.787367  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:57.787374  450393 cri.go:89] found id: ""
	I0805 13:03:57.787384  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:57.787448  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.792340  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.796906  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:57.796933  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:57.850401  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:57.850447  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:57.961760  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:57.961808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:58.009682  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:58.009720  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:58.061874  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:58.061915  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:58.105715  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:58.105745  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:58.164739  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:58.164780  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:58.203530  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:58.203579  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:58.245478  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:58.245511  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:58.647807  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:58.647857  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:58.694175  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:58.694211  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:58.709744  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:58.709773  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:58.750668  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:58.750698  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:04:01.297212  450393 system_pods.go:59] 8 kube-system pods found
	I0805 13:04:01.297248  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.297255  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.297261  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.297265  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.297269  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.297273  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.297281  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.297289  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.297300  450393 system_pods.go:74] duration metric: took 3.904508974s to wait for pod list to return data ...
	I0805 13:04:01.297312  450393 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:01.299765  450393 default_sa.go:45] found service account: "default"
	I0805 13:04:01.299792  450393 default_sa.go:55] duration metric: took 2.470684ms for default service account to be created ...
	I0805 13:04:01.299802  450393 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:01.304612  450393 system_pods.go:86] 8 kube-system pods found
	I0805 13:04:01.304644  450393 system_pods.go:89] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.304651  450393 system_pods.go:89] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.304656  450393 system_pods.go:89] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.304661  450393 system_pods.go:89] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.304665  450393 system_pods.go:89] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.304670  450393 system_pods.go:89] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.304677  450393 system_pods.go:89] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.304685  450393 system_pods.go:89] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.304694  450393 system_pods.go:126] duration metric: took 4.885808ms to wait for k8s-apps to be running ...
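
The kube-system inventory above is the programmatic equivalent of listing the namespace with kubectl; a minimal sketch against this profile, assuming the kubeconfig context name matches the profile name (which is how minikube normally writes it):

	kubectl --context embed-certs-321139 -n kube-system get pods -o wide
	# everything is Running except metrics-server-569cc877fc-k8mrt,
	# which the log above reports as Pending / ContainersNotReady
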
	I0805 13:04:01.304702  450393 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:01.304751  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:01.323278  450393 system_svc.go:56] duration metric: took 18.55935ms WaitForService to wait for kubelet
	I0805 13:04:01.323316  450393 kubeadm.go:582] duration metric: took 4m22.01227204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:01.323349  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:01.326802  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:01.326829  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:01.326843  450393 node_conditions.go:105] duration metric: took 3.486931ms to run NodePressure ...
	I0805 13:04:01.326859  450393 start.go:241] waiting for startup goroutines ...
	I0805 13:04:01.326869  450393 start.go:246] waiting for cluster config update ...
	I0805 13:04:01.326883  450393 start.go:255] writing updated cluster config ...
	I0805 13:04:01.327230  450393 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:01.380315  450393 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:01.381891  450393 out.go:177] * Done! kubectl is now configured to use "embed-certs-321139" cluster and "default" namespace by default
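
Once the profile reports "Done!", the host kubeconfig already points at the new cluster, so a quick sanity check is possible without touching the node (plain kubectl usage, not part of the test flow):

	kubectl config current-context   # should name the embed-certs-321139 profile
	kubectl get nodes -o wide        # single control-plane node, Ready
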
	I0805 13:03:57.113870  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:03:57.114408  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:03:57.114630  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:03:57.467412  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:57.967538  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.467217  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.967035  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.466816  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.966909  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.467553  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.967667  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.467382  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.967495  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:02.085428  450884 kubeadm.go:1113] duration metric: took 13.335097096s to wait for elevateKubeSystemPrivileges
	I0805 13:04:02.085464  450884 kubeadm.go:394] duration metric: took 5m13.227479413s to StartCluster
	I0805 13:04:02.085482  450884 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.085571  450884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:04:02.087178  450884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.087425  450884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:04:02.087550  450884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:04:02.087653  450884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087659  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:04:02.087681  450884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087697  450884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087718  450884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087729  450884 addons.go:243] addon metrics-server should already be in state true
	I0805 13:04:02.087783  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.087727  450884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-371585"
	I0805 13:04:02.087692  450884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087953  450884 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:04:02.087986  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088294  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088377  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088406  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088415  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088935  450884 out.go:177] * Verifying Kubernetes components...
	I0805 13:04:02.090386  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:04:02.105328  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0805 13:04:02.105335  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0805 13:04:02.105853  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.105848  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106395  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106398  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106420  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106423  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106506  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0805 13:04:02.106879  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.106957  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106982  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.107193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.107508  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.107522  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.107534  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.107561  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.107903  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.108458  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.108490  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.111681  450884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.111709  450884 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:04:02.111775  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.113601  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.113648  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.127860  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0805 13:04:02.128512  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.128619  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0805 13:04:02.129023  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.129174  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129198  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129495  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129516  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129566  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129850  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129879  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.130443  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.131691  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.132370  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.133468  450884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:04:02.134210  450884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:04:02.134899  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0805 13:04:02.135049  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:04:02.135067  450884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:04:02.135099  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135183  450884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.135201  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:04:02.135216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135404  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.136704  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.136723  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.138362  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.138801  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.138918  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139264  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.139290  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.139335  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139377  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139404  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139482  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139637  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139737  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139807  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139867  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.139909  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.159720  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0805 13:04:02.160199  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.160744  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.160770  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.161048  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.161246  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.162535  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.162788  450884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.162805  450884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:04:02.162825  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.165787  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166204  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.166236  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166411  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.166594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.166744  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.166876  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.349175  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:04:02.453663  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.462474  450884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472177  450884 node_ready.go:49] node "default-k8s-diff-port-371585" has status "Ready":"True"
	I0805 13:04:02.472201  450884 node_ready.go:38] duration metric: took 9.692872ms for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472211  450884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:02.474341  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:04:02.474363  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:04:02.485604  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:02.514889  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.543388  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:04:02.543428  450884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:04:02.618040  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.618094  450884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:04:02.716705  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.784102  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784545  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784566  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784577  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784586  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784588  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.784851  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784868  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784868  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.797584  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.797617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.797938  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.797956  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431060  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431452  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:03.431511  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431530  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431539  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431839  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431893  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.746668  450884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029912928s)
	I0805 13:04:03.746734  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.746750  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.747152  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.747180  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.747191  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.747200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.748527  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.748558  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.748571  450884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-371585"
	I0805 13:04:03.750522  450884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:04:03.751714  450884 addons.go:510] duration metric: took 1.664163176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
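
With the addon manifests applied, the metrics-server rollout can be checked directly; a sketch under the assumption that the deployment is named metrics-server and registers the usual v1beta1.metrics.k8s.io APIService (both inferred from the pod and manifest names in the log, not stated explicitly):

	kubectl --context default-k8s-diff-port-371585 -n kube-system rollout status deployment/metrics-server
	kubectl --context default-k8s-diff-port-371585 get apiservice v1beta1.metrics.k8s.io
	# in this run the pod stays Pending (ContainersNotReady), so the rollout
	# never completes and "kubectl top" would keep reporting no metrics
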
	I0805 13:04:04.491832  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.491861  450884 pod_ready.go:81] duration metric: took 2.00623062s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.491870  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496173  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.496194  450884 pod_ready.go:81] duration metric: took 4.317446ms for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496202  450884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500270  450884 pod_ready.go:92] pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.500297  450884 pod_ready.go:81] duration metric: took 4.088399ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500309  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504892  450884 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.504917  450884 pod_ready.go:81] duration metric: took 4.598589ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504926  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509448  450884 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.509468  450884 pod_ready.go:81] duration metric: took 4.535174ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509478  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890517  450884 pod_ready.go:92] pod "kube-proxy-4v6sn" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.890544  450884 pod_ready.go:81] duration metric: took 381.059204ms for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890552  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289670  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:05.289701  450884 pod_ready.go:81] duration metric: took 399.141309ms for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289712  450884 pod_ready.go:38] duration metric: took 2.817491444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
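
The per-pod readiness waits above can be reproduced with kubectl wait, using the same label selectors the test lists (k8s-app=kube-dns, component=etcd, and so on); for example, for CoreDNS:

	kubectl --context default-k8s-diff-port-371585 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
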
	I0805 13:04:05.289732  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:04:05.289805  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:04:05.305815  450884 api_server.go:72] duration metric: took 3.218344531s to wait for apiserver process to appear ...
	I0805 13:04:05.305848  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:04:05.305870  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 13:04:05.311144  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 13:04:05.312427  450884 api_server.go:141] control plane version: v1.30.3
	I0805 13:04:05.312450  450884 api_server.go:131] duration metric: took 6.595933ms to wait for apiserver health ...
	I0805 13:04:05.312460  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:04:05.493376  450884 system_pods.go:59] 9 kube-system pods found
	I0805 13:04:05.493417  450884 system_pods.go:61] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.493425  450884 system_pods.go:61] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.493432  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.493438  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.493444  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.493450  450884 system_pods.go:61] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.493456  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.493465  450884 system_pods.go:61] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.493475  450884 system_pods.go:61] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.493488  450884 system_pods.go:74] duration metric: took 181.019102ms to wait for pod list to return data ...
	I0805 13:04:05.493504  450884 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:05.688283  450884 default_sa.go:45] found service account: "default"
	I0805 13:04:05.688313  450884 default_sa.go:55] duration metric: took 194.799711ms for default service account to be created ...
	I0805 13:04:05.688323  450884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:05.892656  450884 system_pods.go:86] 9 kube-system pods found
	I0805 13:04:05.892688  450884 system_pods.go:89] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.892696  450884 system_pods.go:89] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.892702  450884 system_pods.go:89] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.892709  450884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.892715  450884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.892721  450884 system_pods.go:89] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.892727  450884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.892737  450884 system_pods.go:89] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.892743  450884 system_pods.go:89] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.892755  450884 system_pods.go:126] duration metric: took 204.423562ms to wait for k8s-apps to be running ...
	I0805 13:04:05.892765  450884 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:05.892819  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:05.907542  450884 system_svc.go:56] duration metric: took 14.764349ms WaitForService to wait for kubelet
	I0805 13:04:05.907576  450884 kubeadm.go:582] duration metric: took 3.820116927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:05.907599  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:06.089000  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:06.089025  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:06.089035  450884 node_conditions.go:105] duration metric: took 181.431221ms to run NodePressure ...
	I0805 13:04:06.089047  450884 start.go:241] waiting for startup goroutines ...
	I0805 13:04:06.089054  450884 start.go:246] waiting for cluster config update ...
	I0805 13:04:06.089065  450884 start.go:255] writing updated cluster config ...
	I0805 13:04:06.089373  450884 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:06.140202  450884 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:06.142149  450884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-371585" cluster and "default" namespace by default
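
The distinguishing feature of this profile is the apiserver port: 8444 instead of minikube's default 8443, as the healthz URL above shows. A quick confirmation from the host (plain kubectl, not part of the test flow):

	kubectl --context default-k8s-diff-port-371585 cluster-info
	# the control plane endpoint should report https://192.168.50.228:8444
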
	I0805 13:04:02.115811  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:02.116057  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:12.115990  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:12.116208  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:32.116734  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:32.117001  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119196  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:05:12.119475  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119502  451238 kubeadm.go:310] 
	I0805 13:05:12.119564  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:05:12.119622  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:05:12.119634  451238 kubeadm.go:310] 
	I0805 13:05:12.119680  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:05:12.119724  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:05:12.119880  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:05:12.119898  451238 kubeadm.go:310] 
	I0805 13:05:12.120029  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:05:12.120114  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:05:12.120169  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:05:12.120179  451238 kubeadm.go:310] 
	I0805 13:05:12.120321  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:05:12.120445  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:05:12.120455  451238 kubeadm.go:310] 
	I0805 13:05:12.120612  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:05:12.120751  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:05:12.120888  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:05:12.121010  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:05:12.121023  451238 kubeadm.go:310] 
	I0805 13:05:12.121325  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:05:12.121458  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:05:12.121545  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 13:05:12.121714  451238 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0805 13:05:12.121782  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:05:12.587687  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:05:12.603422  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:05:12.614302  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:05:12.614330  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:05:12.614391  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:05:12.625131  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:05:12.625199  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:05:12.635606  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:05:12.644896  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:05:12.644953  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:05:12.655178  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.664668  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:05:12.664753  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.675174  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:05:12.684765  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:05:12.684834  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:05:12.694762  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:05:12.930906  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:07:09.256859  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:07:09.257016  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 13:07:09.258511  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:07:09.258579  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:07:09.258710  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:07:09.258881  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:07:09.259022  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:07:09.259125  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:07:09.260912  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:07:09.261023  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:07:09.261123  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:07:09.261232  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:07:09.261319  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:07:09.261411  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:07:09.261507  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:07:09.261601  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:07:09.261690  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:07:09.261801  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:07:09.261946  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:07:09.262015  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:07:09.262119  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:07:09.262198  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:07:09.262273  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:07:09.262369  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:07:09.262464  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:07:09.262615  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:07:09.262731  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:07:09.262770  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:07:09.262831  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:07:09.264428  451238 out.go:204]   - Booting up control plane ...
	I0805 13:07:09.264537  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:07:09.264663  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:07:09.264774  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:07:09.264896  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:07:09.265144  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:07:09.265224  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:07:09.265318  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265554  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265630  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265783  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265886  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266143  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266221  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266387  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266472  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266656  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266673  451238 kubeadm.go:310] 
	I0805 13:07:09.266707  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:07:09.266738  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:07:09.266743  451238 kubeadm.go:310] 
	I0805 13:07:09.266788  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:07:09.266819  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:07:09.266924  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:07:09.266932  451238 kubeadm.go:310] 
	I0805 13:07:09.267050  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:07:09.267137  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:07:09.267192  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:07:09.267201  451238 kubeadm.go:310] 
	I0805 13:07:09.267316  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:07:09.267435  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:07:09.267445  451238 kubeadm.go:310] 
	I0805 13:07:09.267570  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:07:09.267683  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:07:09.267802  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:07:09.267898  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:07:09.267986  451238 kubeadm.go:310] 
	I0805 13:07:09.268003  451238 kubeadm.go:394] duration metric: took 7m57.870990174s to StartCluster
	I0805 13:07:09.268066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:07:09.268158  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:07:09.311436  451238 cri.go:89] found id: ""
	I0805 13:07:09.311471  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.311497  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:07:09.311509  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:07:09.311573  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:07:09.347748  451238 cri.go:89] found id: ""
	I0805 13:07:09.347776  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.347784  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:07:09.347797  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:07:09.347860  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:07:09.385418  451238 cri.go:89] found id: ""
	I0805 13:07:09.385445  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.385453  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:07:09.385460  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:07:09.385517  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:07:09.427209  451238 cri.go:89] found id: ""
	I0805 13:07:09.427255  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.427268  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:07:09.427276  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:07:09.427360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:07:09.461763  451238 cri.go:89] found id: ""
	I0805 13:07:09.461787  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.461795  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:07:09.461801  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:07:09.461854  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:07:09.498655  451238 cri.go:89] found id: ""
	I0805 13:07:09.498692  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.498705  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:07:09.498713  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:07:09.498782  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:07:09.534100  451238 cri.go:89] found id: ""
	I0805 13:07:09.534134  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.534143  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:07:09.534149  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:07:09.534207  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:07:09.570089  451238 cri.go:89] found id: ""
	I0805 13:07:09.570125  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.570137  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:07:09.570153  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:07:09.570176  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:07:09.625158  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:07:09.625199  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:07:09.640087  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:07:09.640119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:07:09.719851  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:07:09.719879  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:07:09.719895  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:07:09.832717  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:07:09.832758  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0805 13:07:09.878585  451238 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 13:07:09.878653  451238 out.go:239] * 
	W0805 13:07:09.878739  451238 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.878767  451238 out.go:239] * 
	W0805 13:07:09.879755  451238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 13:07:09.883027  451238 out.go:177] 
	W0805 13:07:09.884197  451238 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.884243  451238 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 13:07:09.884265  451238 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 13:07:09.885783  451238 out.go:177] 
	
	
	==> CRI-O <==
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.818904495Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863231818873666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1399932-c5c4-4e8e-8624-a5da93e2dc3e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.819548915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db4118f2-c0de-4b49-9170-29b692b81ce9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.819605480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db4118f2-c0de-4b49-9170-29b692b81ce9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.819637843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=db4118f2-c0de-4b49-9170-29b692b81ce9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.854742120Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c66139a-154d-4209-a9d8-9c423692b9dc name=/runtime.v1.RuntimeService/Version
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.854860315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c66139a-154d-4209-a9d8-9c423692b9dc name=/runtime.v1.RuntimeService/Version
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.856153316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33a2b01b-c5b2-4aaf-8d6b-99c5099efd87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.856655572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863231856633723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33a2b01b-c5b2-4aaf-8d6b-99c5099efd87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.857909820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53a3deda-163e-4d7f-90cb-9a3aeff573fd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.857980444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53a3deda-163e-4d7f-90cb-9a3aeff573fd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.858034129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=53a3deda-163e-4d7f-90cb-9a3aeff573fd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.895857127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9add930f-cbf8-4b33-9754-8721818e65d9 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.895952232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9add930f-cbf8-4b33-9754-8721818e65d9 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.897324012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bda71b84-d1af-4e58-ad2c-df82210060ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.897764087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863231897733798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bda71b84-d1af-4e58-ad2c-df82210060ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.898809763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97d69daa-9663-4b17-bddd-2bbf70792c50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.898881352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97d69daa-9663-4b17-bddd-2bbf70792c50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.898930512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=97d69daa-9663-4b17-bddd-2bbf70792c50 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.931064250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81ae74ea-0a1c-469a-98e3-632d71ea7845 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.931145182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81ae74ea-0a1c-469a-98e3-632d71ea7845 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.932704451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be322207-321d-4e7c-8c62-18eaac45a95b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.933253361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863231933167133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be322207-321d-4e7c-8c62-18eaac45a95b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.933750951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b174b0bb-826e-4422-bd16-e57bcc009f03 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.933824189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b174b0bb-826e-4422-bd16-e57bcc009f03 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:07:11 old-k8s-version-635707 crio[653]: time="2024-08-05 13:07:11.933860107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b174b0bb-826e-4422-bd16-e57bcc009f03 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 5 12:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051038] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.092710] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.744514] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605530] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 5 12:59] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.063666] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056001] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.204547] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.129155] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.264906] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +6.500378] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.060609] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.866070] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +12.192283] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 5 13:03] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Aug 5 13:05] systemd-fstab-generator[5302]: Ignoring "noauto" option for root device
	[  +0.067316] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:07:12 up 8 min,  0 users,  load average: 0.01, 0.13, 0.09
	Linux old-k8s-version-635707 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: net.(*Dialer).DialContext(0xc000b21da0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c9c4b0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b4ab20, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c9c4b0, 0x24, 0x1000000000060, 0x7f53d8994188, 0x118, ...)
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: net/http.(*Transport).dial(0xc000630280, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c9c4b0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: net/http.(*Transport).dialConn(0xc000630280, 0x4f7fe00, 0xc000052030, 0x0, 0xc000410600, 0x5, 0xc000c9c4b0, 0x24, 0x0, 0xc000cc6000, ...)
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: net/http.(*Transport).dialConnFor(0xc000630280, 0xc000c0f3f0)
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: created by net/http.(*Transport).queueForDial
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: goroutine 173 [select]:
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c77ce0, 0xc0007d3180, 0xc000c8ef00, 0xc000c8eea0)
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]: created by net.(*netFD).connect
	Aug 05 13:07:09 old-k8s-version-635707 kubelet[5483]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Aug 05 13:07:09 old-k8s-version-635707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 05 13:07:09 old-k8s-version-635707 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 05 13:07:09 old-k8s-version-635707 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 05 13:07:10 old-k8s-version-635707 kubelet[5548]: I0805 13:07:09.997486    5548 server.go:416] Version: v1.20.0
	Aug 05 13:07:10 old-k8s-version-635707 kubelet[5548]: I0805 13:07:09.997917    5548 server.go:837] Client rotation is on, will bootstrap in background
	Aug 05 13:07:10 old-k8s-version-635707 kubelet[5548]: I0805 13:07:10.001068    5548 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 05 13:07:10 old-k8s-version-635707 kubelet[5548]: I0805 13:07:10.003252    5548 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 05 13:07:10 old-k8s-version-635707 kubelet[5548]: W0805 13:07:10.003413    5548 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (229.564169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-635707" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (721.67s)
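Note: this failure exits via K8S_KUBELET_NOT_RUNNING; the log's own suggestion is to inspect the kubelet (its restart counter reached 20 above, and it reports "Cannot detect current cgroup on cgroup v2") and to retry with the systemd cgroup driver. A minimal diagnosis/retry sketch against this profile (old-k8s-version-635707 and Kubernetes v1.20.0 are taken from the log; the exact set of start flags the test itself uses is an assumption):

	# inspect why the kubelet keeps restarting on the node
	journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start (the log above found none)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the start with the cgroup driver suggested by minikube's error message
	out/minikube-linux-amd64 start -p old-k8s-version-635707 --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd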

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-669469 -n no-preload-669469
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-05 13:12:46.261221365 +0000 UTC m=+6353.218545074
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
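For context, the pod selector the test polls for can be checked directly with kubectl against the profile's context (a sketch; it assumes the kubeconfig context is named after the profile, as with the other kubectl invocations in this report):

	# list the dashboard pods the test waited 9m0s for
	kubectl --context no-preload-669469 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard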
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-669469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-669469 logs -n 25: (2.145106667s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-119870 sudo cat                              | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo find                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo crio                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-119870                                       | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:55:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
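(Aside, not part of the captured log: the header format above is the standard klog/glog prefix used by every I/W/E/F line that follows. Purely as an illustration — and assuming the leading tab added by this report is stripped first — a small Go regexp can split such a line into its severity, date, time, PID, and source-location fields.)

// Illustrative only: parse a klog-style header of the form
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

func main() {
	line := "I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ..."
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog header")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}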
	I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:55:11.960471  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960479  451238 out.go:304] Setting ErrFile to fd 2...
	I0805 12:55:11.960484  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960646  451238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:55:11.961145  451238 out.go:298] Setting JSON to false
	I0805 12:55:11.962063  451238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9459,"bootTime":1722853053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:55:11.962121  451238 start.go:139] virtualization: kvm guest
	I0805 12:55:11.964372  451238 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:55:11.965770  451238 notify.go:220] Checking for updates...
	I0805 12:55:11.965787  451238 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:55:11.967106  451238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:55:11.968790  451238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:55:11.970181  451238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:55:11.971500  451238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:55:11.973243  451238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:55:11.974825  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:55:11.975239  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.975319  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:11.990296  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0805 12:55:11.990704  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:11.991235  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:11.991259  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:11.991575  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:11.991765  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:11.993484  451238 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 12:55:11.994687  451238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:55:11.994952  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.994984  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:12.009528  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0805 12:55:12.009879  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:12.010353  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:12.010375  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:12.010670  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:12.010857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:12.044634  451238 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:55:12.045859  451238 start.go:297] selected driver: kvm2
	I0805 12:55:12.045876  451238 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.045987  451238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:55:12.046662  451238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.046731  451238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:55:12.061918  451238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:55:12.062400  451238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:55:12.062484  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:55:12.062502  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:55:12.062572  451238 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.062722  451238 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.064478  451238 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:55:10.820047  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:13.892041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:12.065640  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:55:12.065680  451238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:55:12.065701  451238 cache.go:56] Caching tarball of preloaded images
	I0805 12:55:12.065786  451238 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:55:12.065797  451238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:55:12.065897  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:55:12.066073  451238 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:55:19.971977  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:23.044092  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:29.124041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:32.196124  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:38.276045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:41.348117  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:47.428042  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:50.500022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:56.580074  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:59.652091  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:05.732072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:08.804128  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:14.884085  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:17.956073  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:24.036067  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:27.108059  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:33.188012  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:36.260134  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:42.340036  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:45.412038  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:51.492022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:54.564068  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:00.644018  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:03.716112  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:09.796041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:12.868080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:18.948054  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:22.020023  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:28.100099  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:31.172076  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:37.251997  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:40.324080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:46.404055  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:49.476072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:55.556045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:58.627984  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:58:01.632326  450576 start.go:364] duration metric: took 4m17.994768704s to acquireMachinesLock for "no-preload-669469"
	I0805 12:58:01.632391  450576 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:01.632403  450576 fix.go:54] fixHost starting: 
	I0805 12:58:01.632845  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:01.632880  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:01.648358  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0805 12:58:01.648860  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:01.649387  450576 main.go:141] libmachine: Using API Version  1
	I0805 12:58:01.649410  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:01.649779  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:01.649963  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:01.650176  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 12:58:01.651681  450576 fix.go:112] recreateIfNeeded on no-preload-669469: state=Stopped err=<nil>
	I0805 12:58:01.651715  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	W0805 12:58:01.651903  450576 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:01.653860  450576 out.go:177] * Restarting existing kvm2 VM for "no-preload-669469" ...
	I0805 12:58:01.655338  450576 main.go:141] libmachine: (no-preload-669469) Calling .Start
	I0805 12:58:01.655475  450576 main.go:141] libmachine: (no-preload-669469) Ensuring networks are active...
	I0805 12:58:01.656224  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network default is active
	I0805 12:58:01.656565  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network mk-no-preload-669469 is active
	I0805 12:58:01.656898  450576 main.go:141] libmachine: (no-preload-669469) Getting domain xml...
	I0805 12:58:01.657537  450576 main.go:141] libmachine: (no-preload-669469) Creating domain...
	I0805 12:58:02.879809  450576 main.go:141] libmachine: (no-preload-669469) Waiting to get IP...
	I0805 12:58:02.880800  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:02.881194  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:02.881270  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:02.881175  451829 retry.go:31] will retry after 303.380177ms: waiting for machine to come up
	I0805 12:58:03.185834  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.186259  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.186288  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.186214  451829 retry.go:31] will retry after 263.494141ms: waiting for machine to come up
	I0805 12:58:03.451923  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.452263  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.452340  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.452217  451829 retry.go:31] will retry after 310.615163ms: waiting for machine to come up
	I0805 12:58:01.629832  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:01.629873  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630250  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:58:01.630295  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630511  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:58:01.632158  450393 machine.go:97] duration metric: took 4m37.422562602s to provisionDockerMachine
	I0805 12:58:01.632208  450393 fix.go:56] duration metric: took 4m37.444588707s for fixHost
	I0805 12:58:01.632226  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 4m37.44461751s
	W0805 12:58:01.632250  450393 start.go:714] error starting host: provision: host is not running
	W0805 12:58:01.632431  450393 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0805 12:58:01.632445  450393 start.go:729] Will try again in 5 seconds ...
	I0805 12:58:03.764803  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.765280  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.765305  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.765243  451829 retry.go:31] will retry after 570.955722ms: waiting for machine to come up
	I0805 12:58:04.338423  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.338863  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.338893  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.338811  451829 retry.go:31] will retry after 485.490715ms: waiting for machine to come up
	I0805 12:58:04.825511  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.825882  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.825911  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.825823  451829 retry.go:31] will retry after 671.109731ms: waiting for machine to come up
	I0805 12:58:05.498113  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:05.498529  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:05.498557  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:05.498467  451829 retry.go:31] will retry after 997.668856ms: waiting for machine to come up
	I0805 12:58:06.497843  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:06.498144  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:06.498161  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:06.498120  451829 retry.go:31] will retry after 996.614411ms: waiting for machine to come up
	I0805 12:58:07.496801  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:07.497298  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:07.497334  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:07.497249  451829 retry.go:31] will retry after 1.384682595s: waiting for machine to come up
	I0805 12:58:06.634410  450393 start.go:360] acquireMachinesLock for embed-certs-321139: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:58:08.883309  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:08.883701  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:08.883732  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:08.883642  451829 retry.go:31] will retry after 2.017073843s: waiting for machine to come up
	I0805 12:58:10.903852  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:10.904279  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:10.904310  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:10.904233  451829 retry.go:31] will retry after 2.485880433s: waiting for machine to come up
	I0805 12:58:13.392693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:13.393169  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:13.393199  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:13.393116  451829 retry.go:31] will retry after 2.986076236s: waiting for machine to come up
	I0805 12:58:16.380921  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:16.381475  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:16.381508  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:16.381432  451829 retry.go:31] will retry after 4.291617536s: waiting for machine to come up
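(Aside, not part of the captured log: the repeated "will retry after …" lines above show a jittered, roughly doubling backoff while the driver polls for the restarted VM's DHCP-assigned IP. The sketch below illustrates that pattern under assumed names; lookupIP is a stand-in, and this is not minikube's retry.go.)

// Hypothetical sketch of the retry pattern: poll with jittered, roughly doubling
// delays until a lookup succeeds or an overall deadline passes.
package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(ctx context.Context) (string, error) {
	delay := 300 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		// add +/-50% jitter so concurrent waiters do not poll in lockstep
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-time.After(jittered):
		}
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if _, err := waitForIP(ctx); err != nil {
		fmt.Println("gave up:", err)
	}
}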
	I0805 12:58:21.948770  450884 start.go:364] duration metric: took 4m4.773878111s to acquireMachinesLock for "default-k8s-diff-port-371585"
	I0805 12:58:21.948843  450884 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:21.948851  450884 fix.go:54] fixHost starting: 
	I0805 12:58:21.949291  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:21.949337  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:21.966933  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0805 12:58:21.967356  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:21.967874  450884 main.go:141] libmachine: Using API Version  1
	I0805 12:58:21.967899  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:21.968326  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:21.968638  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:21.968874  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 12:58:21.970608  450884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371585: state=Stopped err=<nil>
	I0805 12:58:21.970631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	W0805 12:58:21.970789  450884 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:21.973235  450884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-371585" ...
	I0805 12:58:21.974564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Start
	I0805 12:58:21.974751  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring networks are active...
	I0805 12:58:21.975581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network default is active
	I0805 12:58:21.976001  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network mk-default-k8s-diff-port-371585 is active
	I0805 12:58:21.976376  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Getting domain xml...
	I0805 12:58:21.977078  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Creating domain...
	I0805 12:58:20.678231  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678743  450576 main.go:141] libmachine: (no-preload-669469) Found IP for machine: 192.168.72.223
	I0805 12:58:20.678771  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has current primary IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678786  450576 main.go:141] libmachine: (no-preload-669469) Reserving static IP address...
	I0805 12:58:20.679230  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.679266  450576 main.go:141] libmachine: (no-preload-669469) Reserved static IP address: 192.168.72.223
	I0805 12:58:20.679288  450576 main.go:141] libmachine: (no-preload-669469) DBG | skip adding static IP to network mk-no-preload-669469 - found existing host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"}
	I0805 12:58:20.679302  450576 main.go:141] libmachine: (no-preload-669469) DBG | Getting to WaitForSSH function...
	I0805 12:58:20.679317  450576 main.go:141] libmachine: (no-preload-669469) Waiting for SSH to be available...
	I0805 12:58:20.681864  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682263  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.682297  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682447  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH client type: external
	I0805 12:58:20.682484  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa (-rw-------)
	I0805 12:58:20.682539  450576 main.go:141] libmachine: (no-preload-669469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:20.682557  450576 main.go:141] libmachine: (no-preload-669469) DBG | About to run SSH command:
	I0805 12:58:20.682568  450576 main.go:141] libmachine: (no-preload-669469) DBG | exit 0
	I0805 12:58:20.807791  450576 main.go:141] libmachine: (no-preload-669469) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:20.808168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetConfigRaw
	I0805 12:58:20.808767  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:20.811170  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811486  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.811517  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811738  450576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/config.json ...
	I0805 12:58:20.811957  450576 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:20.811976  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:20.812203  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.814305  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814656  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.814693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814823  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.814996  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815156  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815329  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.815503  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.815871  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.815887  450576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:20.920311  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:20.920344  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920642  450576 buildroot.go:166] provisioning hostname "no-preload-669469"
	I0805 12:58:20.920695  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920951  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.924029  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924583  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.924611  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924770  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.925001  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925190  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925334  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.925514  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.925755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.925774  450576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-669469 && echo "no-preload-669469" | sudo tee /etc/hostname
	I0805 12:58:21.046579  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-669469
	
	I0805 12:58:21.046614  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.049322  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049657  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.049687  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049851  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.050049  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050239  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050412  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.050588  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.050755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.050771  450576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-669469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-669469/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-669469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:21.165100  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:21.165134  450576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:21.165170  450576 buildroot.go:174] setting up certificates
	I0805 12:58:21.165180  450576 provision.go:84] configureAuth start
	I0805 12:58:21.165191  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:21.165477  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.168018  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168399  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.168443  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168703  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.171168  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171536  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.171565  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171638  450576 provision.go:143] copyHostCerts
	I0805 12:58:21.171713  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:21.171724  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:21.171807  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:21.171920  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:21.171930  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:21.171955  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:21.172010  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:21.172016  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:21.172037  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:21.172095  450576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.no-preload-669469 san=[127.0.0.1 192.168.72.223 localhost minikube no-preload-669469]
	I0805 12:58:21.287395  450576 provision.go:177] copyRemoteCerts
	I0805 12:58:21.287463  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:21.287505  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.290416  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290765  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.290796  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290962  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.291169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.291323  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.291460  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.373992  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:58:21.398249  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:21.422950  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:58:21.446469  450576 provision.go:87] duration metric: took 281.275299ms to configureAuth
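copyRemoteCerts above ships ca.pem, server.pem and server-key.pem into /etc/docker on the guest over SSH. A hedged sketch of pushing one file over an SSH session using golang.org/x/crypto/ssh is shown below; it is an illustration of the idea, not minikube's ssh_runner, and the key path is abbreviated.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushFile writes the local file to remotePath on the host via "sudo tee",
    // roughly what the scp lines in the log accomplish.
    func pushFile(client *ssh.Client, local, remotePath string) error {
    	data, err := os.ReadFile(local)
    	if err != nil {
    		return err
    	}
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }

    func main() {
    	// Hypothetical key location; the log uses the machine's id_rsa under .minikube.
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/no-preload-669469/id_rsa"))
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.223:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	if err := pushFile(client, "ca.pem", "/etc/docker/ca.pem"); err != nil {
    		panic(err)
    	}
    }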
	I0805 12:58:21.446500  450576 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:21.446688  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:58:21.446813  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.449833  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450219  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.450235  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450526  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.450814  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.450993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.451168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.451342  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.451515  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.451532  450576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:21.714813  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:21.714842  450576 machine.go:97] duration metric: took 902.872257ms to provisionDockerMachine
	I0805 12:58:21.714858  450576 start.go:293] postStartSetup for "no-preload-669469" (driver="kvm2")
	I0805 12:58:21.714889  450576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:21.714940  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.715304  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:21.715333  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.717989  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718405  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.718427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718597  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.718832  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.718993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.719152  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.802634  450576 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:21.806957  450576 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:21.806985  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:21.807079  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:21.807186  450576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:21.807293  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:21.816690  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:21.839848  450576 start.go:296] duration metric: took 124.973515ms for postStartSetup
	I0805 12:58:21.839903  450576 fix.go:56] duration metric: took 20.207499572s for fixHost
	I0805 12:58:21.839934  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.842548  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.842869  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.842893  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.843090  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.843310  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843502  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843640  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.843815  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.844015  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.844029  450576 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:21.948584  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862701.921979093
	
	I0805 12:58:21.948613  450576 fix.go:216] guest clock: 1722862701.921979093
	I0805 12:58:21.948623  450576 fix.go:229] Guest: 2024-08-05 12:58:21.921979093 +0000 UTC Remote: 2024-08-05 12:58:21.83991063 +0000 UTC m=+278.340267839 (delta=82.068463ms)
	I0805 12:58:21.948671  450576 fix.go:200] guest clock delta is within tolerance: 82.068463ms
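fix.go above compares the guest clock (read with `date +%s.%N`) against the host's and checks the delta against a tolerance. A small hedged sketch of that comparison; the tolerance value is an assumption, and the fractional part is assumed to be the 9-digit nanosecond field that `%N` prints.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, as in the log line above.
    	guestRaw := "1722862701.921979093"

    	parts := strings.SplitN(guestRaw, ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64) // 9 digits => nanoseconds
    	guest := time.Unix(sec, nsec)

    	host := time.Now()
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}

    	// Hypothetical tolerance; the real check only logs "within tolerance" above.
    	const tolerance = 1 * time.Second
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }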
	I0805 12:58:21.948680  450576 start.go:83] releasing machines lock for "no-preload-669469", held for 20.316310092s
	I0805 12:58:21.948713  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.948990  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.951624  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952086  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.952136  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952256  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952797  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952984  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.953065  450576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:21.953113  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.953227  450576 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:21.953255  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.955837  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956081  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956200  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956227  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956370  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956504  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956528  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.956568  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956670  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.956760  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956857  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.956906  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.957058  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.957205  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:22.058847  450576 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:22.065110  450576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:22.211415  450576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:22.219405  450576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:22.219492  450576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:22.240631  450576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:22.240659  450576 start.go:495] detecting cgroup driver to use...
	I0805 12:58:22.240764  450576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:22.258777  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:22.273312  450576 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:22.273400  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:22.288455  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:22.305028  450576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:22.428098  450576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:22.586232  450576 docker.go:233] disabling docker service ...
	I0805 12:58:22.586318  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:22.611888  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:22.627393  450576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:22.757335  450576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:22.878168  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:22.896174  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:22.914395  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:23.229202  450576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 12:58:23.229300  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.242180  450576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:23.242262  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.254577  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.265805  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.276522  450576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:23.287288  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.297863  450576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.314322  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
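The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch cgroup_manager to cgroupfs before enabling unprivileged low ports. A hedged Go sketch of the same kind of in-place edit; the path and keys mirror the commands in the log, but this is an illustration rather than minikube's implementation.

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)

    	// Pin the pause image, mirroring the first sed above.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

    	// Force the cgroupfs cgroup manager, mirroring the second sed above.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    }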
	I0805 12:58:23.324662  450576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:23.334125  450576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:23.334192  450576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:23.346701  450576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:23.356256  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:23.474046  450576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:23.617276  450576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:23.617363  450576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:23.622001  450576 start.go:563] Will wait 60s for crictl version
	I0805 12:58:23.622047  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:23.626041  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:23.670186  450576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:23.670267  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.700616  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.733411  450576 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 12:58:23.254293  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting to get IP...
	I0805 12:58:23.255331  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255802  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255880  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.255773  451963 retry.go:31] will retry after 245.269435ms: waiting for machine to come up
	I0805 12:58:23.502617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503130  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.503068  451963 retry.go:31] will retry after 243.155673ms: waiting for machine to come up
	I0805 12:58:23.747498  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747913  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.747867  451963 retry.go:31] will retry after 459.286566ms: waiting for machine to come up
	I0805 12:58:24.208594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209076  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209127  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.209003  451963 retry.go:31] will retry after 499.069946ms: waiting for machine to come up
	I0805 12:58:24.709128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709554  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709577  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.709512  451963 retry.go:31] will retry after 732.735525ms: waiting for machine to come up
	I0805 12:58:25.443632  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444185  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:25.444125  451963 retry.go:31] will retry after 883.69375ms: waiting for machine to come up
	I0805 12:58:26.329477  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330010  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:26.329947  451963 retry.go:31] will retry after 1.157298734s: waiting for machine to come up
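The default-k8s-diff-port-371585 machine above is polled for an IP with growing retry delays ("will retry after …: waiting for machine to come up"). A hedged sketch of such a poll loop with jittered, growing backoff; lookupIP is a stand-in function, not libmachine's lease query.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a stand-in for asking libvirt for the domain's DHCP lease.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func main() {
    	delay := 200 * time.Millisecond
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		ip, err := lookupIP()
    		if err == nil {
    			fmt.Println("Found IP for machine:", ip)
    			return
    		}
    		// Jittered, growing delay, similar in spirit to the retry.go lines above.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2
    	}
    	fmt.Println("timed out waiting for machine IP")
    }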
	I0805 12:58:23.734875  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:23.737945  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738460  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:23.738487  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738646  450576 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:23.742894  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:23.756164  450576 kubeadm.go:883] updating cluster {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:23.756435  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.035575  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.352144  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.657175  450576 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 12:58:24.657266  450576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:24.694685  450576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 12:58:24.694720  450576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:58:24.694809  450576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.694831  450576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.694845  450576 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0805 12:58:24.694867  450576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.694835  450576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.694815  450576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.694801  450576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.694917  450576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.696859  450576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.696860  450576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696902  450576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.696904  450576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.696881  450576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
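After the local daemon lookups above fail, the following lines probe the guest's container store with `sudo podman image inspect --format {{.Id}}`. A hedged sketch of that existence check as a plain exec call (meant to run on the guest; error handling simplified, image list trimmed to two examples from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageID returns the image ID if the image is present in podman's store,
    // or an error if it is not, mirroring the inspect runs in the log below.
    func imageID(ref string) (string, error) {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", ref).Output()
    	if err != nil {
    		return "", fmt.Errorf("%s not present: %w", ref, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	for _, ref := range []string{
    		"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
    		"registry.k8s.io/pause:3.10",
    	} {
    		if id, err := imageID(ref); err != nil {
    			fmt.Println("needs transfer:", ref)
    		} else {
    			fmt.Println("present:", ref, id)
    		}
    	}
    }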
	I0805 12:58:24.864249  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.867334  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.905018  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.920294  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.925405  450576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0805 12:58:24.925440  450576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0805 12:58:24.925456  450576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.925476  450576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.925508  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.925520  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.973191  450576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0805 12:58:24.973240  450576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.973304  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.986685  450576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0805 12:58:24.986706  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.986723  450576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.986772  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.037012  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.037066  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:25.037132  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.067311  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.068528  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.073769  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073831  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.073872  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073933  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.082476  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0805 12:58:25.126044  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0805 12:58:25.126080  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126127  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0805 12:58:25.126144  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126230  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:25.149903  450576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0805 12:58:25.149965  450576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.150028  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196288  450576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0805 12:58:25.196336  450576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.196388  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196416  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0805 12:58:25.196510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0805 12:58:25.651632  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.532922  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.406747514s)
	I0805 12:58:27.532959  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0805 12:58:27.532994  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533010  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.406755032s)
	I0805 12:58:27.533048  450576 ssh_runner.go:235] Completed: which crictl: (2.383000552s)
	I0805 12:58:27.533050  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0805 12:58:27.533082  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533082  450576 ssh_runner.go:235] Completed: which crictl: (2.336681164s)
	I0805 12:58:27.533095  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:27.533117  450576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.88145852s)
	I0805 12:58:27.533139  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:27.533161  450576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0805 12:58:27.533198  450576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.533234  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:27.488683  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489080  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489108  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:27.489027  451963 retry.go:31] will retry after 997.566168ms: waiting for machine to come up
	I0805 12:58:28.488397  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488846  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488878  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:28.488794  451963 retry.go:31] will retry after 1.327498575s: waiting for machine to come up
	I0805 12:58:29.818339  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818705  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818735  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:29.818660  451963 retry.go:31] will retry after 2.105158858s: waiting for machine to come up
	I0805 12:58:31.925036  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925601  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:31.925492  451963 retry.go:31] will retry after 2.860711737s: waiting for machine to come up
	I0805 12:58:29.629896  450576 ssh_runner.go:235] Completed: which crictl: (2.096633143s)
	I0805 12:58:29.630000  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:29.630084  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.096969259s)
	I0805 12:58:29.630184  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0805 12:58:29.630102  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.09697893s)
	I0805 12:58:29.630255  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0805 12:58:29.630121  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0: (2.096957841s)
	I0805 12:58:29.630282  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:29.630286  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630313  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.630322  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630381  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.675831  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0805 12:58:29.675914  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 12:58:29.676019  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:31.695376  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.06501136s)
	I0805 12:58:31.695429  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0805 12:58:31.695458  450576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:31.695476  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.019437866s)
	I0805 12:58:31.695382  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (2.064967299s)
	I0805 12:58:31.695510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0805 12:58:31.695523  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0805 12:58:31.695536  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:34.789126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789644  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789673  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:34.789592  451963 retry.go:31] will retry after 2.763937018s: waiting for machine to come up
	I0805 12:58:33.659147  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.963588438s)
	I0805 12:58:33.659183  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0805 12:58:33.659216  450576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:33.659263  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:37.466579  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.807281649s)
	I0805 12:58:37.466623  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0805 12:58:37.466657  450576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:37.466709  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:38.111584  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 12:58:38.111633  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:38.111678  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
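The cache_images flow above transfers tarballs from the host cache into /var/lib/minikube/images and loads them one by one with `sudo podman load -i`. A hedged sketch of that load loop; the tarball names are the ones shown in the log, while the skip-if-exists, timing and locking logic of the real flow is omitted.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    )

    func main() {
    	tarballs := []string{
    		"kube-controller-manager_v1.31.0-rc.0",
    		"kube-apiserver_v1.31.0-rc.0",
    		"kube-proxy_v1.31.0-rc.0",
    		"coredns_v1.11.1",
    		"etcd_3.5.15-0",
    		"storage-provisioner_v5",
    		"kube-scheduler_v1.31.0-rc.0",
    	}
    	for _, name := range tarballs {
    		img := filepath.Join("/var/lib/minikube/images", name)
    		// Same command shape as the "podman load -i" runs in the log above.
    		out, err := exec.Command("sudo", "podman", "load", "-i", img).CombinedOutput()
    		if err != nil {
    			fmt.Printf("loading %s failed: %v\n%s", img, err, out)
    			continue
    		}
    		fmt.Println("loaded", img)
    	}
    }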
	I0805 12:58:37.554827  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555233  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555263  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:37.555184  451963 retry.go:31] will retry after 3.143735106s: waiting for machine to come up
	I0805 12:58:40.701139  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701615  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Found IP for machine: 192.168.50.228
	I0805 12:58:40.701649  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has current primary IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701660  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserving static IP address...
	I0805 12:58:40.702105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.702126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserved static IP address: 192.168.50.228
	I0805 12:58:40.702146  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | skip adding static IP to network mk-default-k8s-diff-port-371585 - found existing host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"}
	I0805 12:58:40.702156  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for SSH to be available...
	I0805 12:58:40.702198  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Getting to WaitForSSH function...
	I0805 12:58:40.704600  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.704920  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.704950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.705091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH client type: external
	I0805 12:58:40.705129  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa (-rw-------)
	I0805 12:58:40.705179  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:40.705200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | About to run SSH command:
	I0805 12:58:40.705218  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | exit 0
	I0805 12:58:40.836818  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | SSH cmd err, output: <nil>: 
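WaitForSSH above keeps attempting an SSH "exit 0" against the machine until it answers. A hedged sketch of the simplest form of that readiness probe, which only dials TCP port 22 until it accepts; the real check in the log goes further and runs an actual SSH command with the machine's key.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH returns once something is listening on addr:22, or gives up
    // after the timeout. The log's WaitForSSH additionally runs "exit 0" over SSH.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", net.JoinHostPort(addr, "22"), 3*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for SSH on %s", addr)
    }

    func main() {
    	if err := waitForSSH("192.168.50.228", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("SSH is available")
    }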
	I0805 12:58:40.837228  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetConfigRaw
	I0805 12:58:40.837884  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:40.840503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.840843  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.840870  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.841129  450884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/config.json ...
	I0805 12:58:40.841353  450884 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:40.841373  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:40.841587  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.843943  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844308  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.844336  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.844614  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844922  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.845067  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.845322  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.845333  450884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:40.952367  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:40.952410  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952733  450884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-371585"
	I0805 12:58:40.952762  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.955642  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.956077  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.956493  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956651  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956804  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.957027  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.957239  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.957255  450884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-371585 && echo "default-k8s-diff-port-371585" | sudo tee /etc/hostname
	I0805 12:58:41.077775  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-371585
	
	I0805 12:58:41.077808  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.080777  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081230  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.081273  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081406  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.081631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081963  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.082139  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.082315  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.082333  450884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-371585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-371585/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-371585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:41.200835  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
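
The if/grep/sed snippet above is how the provisioner pins the machine hostname to 127.0.1.1 in the guest's /etc/hosts, and it is executed through the "native" Go SSH client rather than an external ssh binary. Below is a minimal, self-contained sketch of running a command that way with golang.org/x/crypto/ssh; the key path and the command are placeholders, not values taken from this run, and this illustrates the pattern rather than minikube's actual provisioner code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials a host and runs a single command, roughly mirroring the
// provisioner's "native" SSH client. Host, user and key path are illustrative.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.50.228:22", "docker", "/path/to/id_rsa", "hostname")
	fmt.Println(out, err)
}
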
	I0805 12:58:41.200871  450884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:41.200923  450884 buildroot.go:174] setting up certificates
	I0805 12:58:41.200934  450884 provision.go:84] configureAuth start
	I0805 12:58:41.200945  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:41.201284  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:41.204107  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204460  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.204494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.206634  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.206948  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.206977  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.207048  450884 provision.go:143] copyHostCerts
	I0805 12:58:41.207139  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:41.207151  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:41.207215  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:41.207333  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:41.207345  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:41.207372  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:41.207451  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:41.207462  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:41.207502  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:41.207573  450884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-371585 san=[127.0.0.1 192.168.50.228 default-k8s-diff-port-371585 localhost minikube]
	I0805 12:58:41.357243  450884 provision.go:177] copyRemoteCerts
	I0805 12:58:41.357344  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:41.357386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.360309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360697  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.360738  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360933  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.361120  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.361295  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.361474  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.454251  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:41.480595  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0805 12:58:41.506729  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:58:41.533349  450884 provision.go:87] duration metric: took 332.399026ms to configureAuth
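
configureAuth regenerates the machine's server certificate with the SAN list shown a few lines up (127.0.0.1, 192.168.50.228, the machine name, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker. A rough sketch of issuing such a SAN certificate with crypto/x509 follows; it generates a throwaway CA in place of the ca.pem/ca-key.pem pair that minikube loads from disk, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube instead loads ca.pem/ca-key.pem from ~/.minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the same kind of SAN list seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-371585"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-371585", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.228")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	_ = os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644)
	_ = os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600)
}
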
	I0805 12:58:41.533402  450884 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:41.533575  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:58:41.533655  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.536469  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.536831  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.536862  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.537006  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.537197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537541  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.537734  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.537946  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.537968  450884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:41.827043  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:41.827078  450884 machine.go:97] duration metric: took 985.710155ms to provisionDockerMachine
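
A note on the literal "%!s(MISSING)" in the command above: it is not part of what ran on the VM. It is Go's fmt annotation for a format verb with no matching argument, produced when the command string (which itself contains a %s) is pushed through a printf-style logger. The command as executed almost certainly read "printf %s". The same artifact accounts for "date +%!s(MISSING).%!N(MISSING)" further down (most likely "date +%s.%N") and for the kubelet eviction thresholds rendered as "0%!"(MISSING) (most likely "0%"). A short demonstration of the mechanism:

package main

import "fmt"

func main() {
	// A command string that itself contains "%s", used as a format string
	// with no arguments, reproduces the artifact seen in the log.
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"\n")
	// Prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
}
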
	I0805 12:58:41.827095  450884 start.go:293] postStartSetup for "default-k8s-diff-port-371585" (driver="kvm2")
	I0805 12:58:41.827109  450884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:41.827145  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:41.827564  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:41.827605  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.830350  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830724  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.830761  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830853  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.831034  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.831206  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.831329  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.915261  450884 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:41.919719  450884 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:41.919760  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:41.919835  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:41.919930  450884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:41.920062  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:41.929842  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:41.958933  450884 start.go:296] duration metric: took 131.820227ms for postStartSetup
	I0805 12:58:41.958981  450884 fix.go:56] duration metric: took 20.010130311s for fixHost
	I0805 12:58:41.959012  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.962092  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962510  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.962540  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962726  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.962968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963153  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.963479  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.963687  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.963700  450884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:42.080993  451238 start.go:364] duration metric: took 3m30.014883629s to acquireMachinesLock for "old-k8s-version-635707"
	I0805 12:58:42.081066  451238 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:42.081076  451238 fix.go:54] fixHost starting: 
	I0805 12:58:42.081569  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:42.081611  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:42.101889  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0805 12:58:42.102366  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:42.102910  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:58:42.102947  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:42.103310  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:42.103552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:58:42.103718  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:58:42.105465  451238 fix.go:112] recreateIfNeeded on old-k8s-version-635707: state=Stopped err=<nil>
	I0805 12:58:42.105504  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	W0805 12:58:42.105674  451238 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:42.107563  451238 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-635707" ...
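
The "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:43379" lines reflect the libmachine driver-plugin model: the kvm2 driver runs as a separate process and the main binary drives it over RPC on localhost, which is what the .GetVersion, .GetState and .Start calls above travel through. The sketch below shows that general pattern with the standard net/rpc package; the Driver type and its GetState method are stand-ins, not the actual libmachine RPC surface.

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver is a stand-in for a machine driver exposed over RPC.
type Driver struct{}

func (d *Driver) GetState(_ int, reply *string) error {
	*reply = "Stopped"
	return nil
}

func main() {
	// "Plugin" side: register the service and listen on an ephemeral localhost port.
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)
	fmt.Println("plugin server listening at", ln.Addr())

	// "Client" side: dial the advertised address and call a method on the driver.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var state string
	if err := client.Call("Driver.GetState", 0, &state); err != nil {
		panic(err)
	}
	fmt.Println("machine state:", state)
}
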
	I0805 12:58:39.567840  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.456137011s)
	I0805 12:58:39.567879  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0805 12:58:39.567905  450576 cache_images.go:123] Successfully loaded all cached images
	I0805 12:58:39.567911  450576 cache_images.go:92] duration metric: took 14.873174481s to LoadCachedImages
	I0805 12:58:39.567921  450576 kubeadm.go:934] updating node { 192.168.72.223 8443 v1.31.0-rc.0 crio true true} ...
	I0805 12:58:39.568053  450576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-669469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:39.568137  450576 ssh_runner.go:195] Run: crio config
	I0805 12:58:39.616607  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:39.616634  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:39.616660  450576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:39.616683  450576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.223 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-669469 NodeName:no-preload-669469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:39.616822  450576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-669469"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:39.616896  450576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 12:58:39.627827  450576 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:39.627901  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:39.637348  450576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0805 12:58:39.653917  450576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 12:58:39.670196  450576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
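
The kubeadm/kubelet/kube-proxy manifest printed above is assembled in memory (hence the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" source just above) before being written to the node. A small illustrative text/template sketch in the same spirit, reduced to a few ClusterConfiguration fields; the template text here is simplified and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: [{{range $i, $san := .CertSANs}}{{if $i}}, {{end}}"{{$san}}"{{end}}]
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
`

type params struct {
	CertSANs            []string
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
}

func main() {
	tmpl := template.Must(template.New("cluster").Parse(clusterCfg))
	err := tmpl.Execute(os.Stdout, params{
		CertSANs:            []string{"127.0.0.1", "localhost", "192.168.72.223"},
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.31.0-rc.0",
	})
	if err != nil {
		panic(err)
	}
}
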
	I0805 12:58:39.686922  450576 ssh_runner.go:195] Run: grep 192.168.72.223	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:39.690804  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:39.703146  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:39.834718  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:39.857015  450576 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469 for IP: 192.168.72.223
	I0805 12:58:39.857036  450576 certs.go:194] generating shared ca certs ...
	I0805 12:58:39.857057  450576 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:39.857229  450576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:39.857286  450576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:39.857300  450576 certs.go:256] generating profile certs ...
	I0805 12:58:39.857431  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/client.key
	I0805 12:58:39.857489  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key.dd0884bb
	I0805 12:58:39.857535  450576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key
	I0805 12:58:39.857683  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:39.857723  450576 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:39.857739  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:39.857769  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:39.857834  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:39.857872  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:39.857923  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:39.858695  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:39.895944  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:39.925816  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:39.960150  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:39.993307  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:58:40.027900  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:58:40.053492  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:40.077331  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:40.101010  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:40.123991  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:40.147563  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:40.170414  450576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:40.188256  450576 ssh_runner.go:195] Run: openssl version
	I0805 12:58:40.193955  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:40.204793  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209061  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209115  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.214948  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:40.226193  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:40.237723  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.241960  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.242019  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.247502  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:40.258791  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:40.270176  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274717  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274786  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.280457  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:40.292091  450576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:40.296842  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:40.303003  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:40.309009  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:40.314951  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:40.320674  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:40.326433  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
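
The run of "openssl x509 -noout -in ... -checkend 86400" commands above is the pre-restart check that each control-plane certificate remains valid for at least another 24 hours. The same check expressed in Go, assuming the certificate is a PEM file on disk:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validForAtLeast reports whether the first certificate in a PEM file
// is still valid for the given duration (the openssl -checkend analogue).
func validForAtLeast(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validForAtLeast("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
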
	I0805 12:58:40.331848  450576 kubeadm.go:392] StartCluster: {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:40.331938  450576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:40.331975  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.374390  450576 cri.go:89] found id: ""
	I0805 12:58:40.374482  450576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:40.385467  450576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:40.385485  450576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:40.385531  450576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:40.395411  450576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:40.396455  450576 kubeconfig.go:125] found "no-preload-669469" server: "https://192.168.72.223:8443"
	I0805 12:58:40.400090  450576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:40.410942  450576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.223
	I0805 12:58:40.410971  450576 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:40.410985  450576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:40.411032  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.453021  450576 cri.go:89] found id: ""
	I0805 12:58:40.453115  450576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:40.470389  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:40.480421  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:40.480445  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:40.480502  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:58:40.489625  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:40.489672  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:40.499261  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:58:40.508571  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:40.508634  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:40.517811  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.526563  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:40.526620  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.535753  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:58:40.544981  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:40.545040  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:40.555237  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:40.565180  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:40.683889  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.632122  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.866665  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.944022  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:42.048030  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:42.048127  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:42.548995  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.048336  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.086457  450576 api_server.go:72] duration metric: took 1.038426772s to wait for apiserver process to appear ...
	I0805 12:58:43.086487  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:43.086509  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:43.086982  450576 api_server.go:269] stopped: https://192.168.72.223:8443/healthz: Get "https://192.168.72.223:8443/healthz": dial tcp 192.168.72.223:8443: connect: connection refused
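
After the kubeadm init phases, the tool polls the apiserver's /healthz endpoint until it answers; the first probe above fails with "connection refused" because the apiserver is still starting. A compact version of such a wait loop is sketched below. The skipped TLS verification is this sketch's shortcut for a serving certificate that is not locally trusted, not necessarily how minikube performs the check.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch only: a production check should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.223:8443/healthz", 4*time.Minute))
}
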
	I0805 12:58:42.080800  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862722.053648046
	
	I0805 12:58:42.080828  450884 fix.go:216] guest clock: 1722862722.053648046
	I0805 12:58:42.080839  450884 fix.go:229] Guest: 2024-08-05 12:58:42.053648046 +0000 UTC Remote: 2024-08-05 12:58:41.958987261 +0000 UTC m=+264.923354352 (delta=94.660785ms)
	I0805 12:58:42.080867  450884 fix.go:200] guest clock delta is within tolerance: 94.660785ms
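
fixHost compares the guest clock, read over SSH with "date +%s.%N" (the "%!s(MISSING).%!N(MISSING)" string earlier), against the host clock and accepts the machine when the delta is inside a tolerance, here about 95ms. A sketch of parsing that seconds.nanoseconds string and computing the skew; comparing against time.Now() stands in for the host-side timestamp the real check captures at the same moment.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1722862722.053648046" (date +%s.%N output) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1722862722.053648046") // value from the log line above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n",
		delta, math.Abs(delta.Seconds()) < 1.0)
}
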
	I0805 12:58:42.080876  450884 start.go:83] releasing machines lock for "default-k8s-diff-port-371585", held for 20.132054114s
	I0805 12:58:42.080916  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.081260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:42.084196  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084662  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.084695  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084867  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085589  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085786  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085875  450884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:42.085925  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.086064  450884 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:42.086091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.088693  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089018  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089042  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.089455  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.089729  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.089730  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089785  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089881  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.089970  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.090128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.090286  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.090457  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.193160  450884 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:42.199341  450884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:42.344713  450884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:42.350944  450884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:42.351026  450884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:42.368162  450884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:42.368196  450884 start.go:495] detecting cgroup driver to use...
	I0805 12:58:42.368260  450884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:42.384477  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:42.401847  450884 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:42.401907  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:42.416318  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:42.430994  450884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:42.545944  450884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:42.721877  450884 docker.go:233] disabling docker service ...
	I0805 12:58:42.721961  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:42.743504  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:42.763111  450884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:42.914270  450884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:43.064816  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:43.090748  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:43.115493  450884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:58:43.115565  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.132497  450884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:43.132583  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.146700  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.159880  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.175598  450884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:43.191263  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.207573  450884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.229567  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.248604  450884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:43.261272  450884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:43.261350  450884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:43.276740  450884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:43.288473  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:43.436066  450884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:43.593264  450884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:43.593355  450884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:43.599342  450884 start.go:563] Will wait 60s for crictl version
	I0805 12:58:43.599419  450884 ssh_runner.go:195] Run: which crictl
	I0805 12:58:43.603681  450884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:43.651181  450884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:43.651296  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.691418  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.725036  450884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
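
After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting the service, the start logic waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to report a version, as the "Will wait 60s for socket path" and "Will wait 60s for crictl version" lines show. A minimal stat-poll with a deadline, roughly what that first wait amounts to:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
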
	I0805 12:58:42.109016  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .Start
	I0805 12:58:42.109214  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:58:42.110192  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:58:42.110686  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:58:42.111108  451238 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:58:42.112194  451238 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:58:43.453015  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:58:43.453994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.454435  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.454504  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.454435  452186 retry.go:31] will retry after 270.355403ms: waiting for machine to come up
	I0805 12:58:43.727101  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.727583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.727641  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.727568  452186 retry.go:31] will retry after 313.75466ms: waiting for machine to come up
	I0805 12:58:44.043303  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.043954  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.043981  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.043855  452186 retry.go:31] will retry after 308.608573ms: waiting for machine to come up
	I0805 12:58:44.354830  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.355396  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.355421  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.355305  452186 retry.go:31] will retry after 510.256657ms: waiting for machine to come up
	I0805 12:58:44.866970  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.867534  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.867559  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.867424  452186 retry.go:31] will retry after 668.55006ms: waiting for machine to come up
	I0805 12:58:45.537377  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:45.537959  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:45.537989  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:45.537909  452186 retry.go:31] will retry after 677.549944ms: waiting for machine to come up
	I0805 12:58:46.217077  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:46.217591  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:46.217625  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:46.217483  452186 retry.go:31] will retry after 847.636867ms: waiting for machine to come up
	I0805 12:58:43.726277  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:43.729689  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730162  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:43.730195  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730391  450884 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:43.735448  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:43.749640  450884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:43.749808  450884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:58:43.749886  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:43.798507  450884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:58:43.798584  450884 ssh_runner.go:195] Run: which lz4
	I0805 12:58:43.803306  450884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:58:43.809104  450884 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:58:43.809144  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:58:45.333758  450884 crio.go:462] duration metric: took 1.530500213s to copy over tarball
	I0805 12:58:45.333831  450884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:58:43.587275  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.303995  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.304038  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.304057  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.308815  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.308849  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.587239  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.595116  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.595151  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.087372  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.094319  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.094363  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.586909  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.592210  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.592252  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.086763  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.095151  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.095182  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.586840  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.593834  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.593870  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.087516  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.093647  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:49.093677  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.587309  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.593592  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 12:58:49.602960  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:58:49.603001  450576 api_server.go:131] duration metric: took 6.516505116s to wait for apiserver health ...
	I0805 12:58:49.603013  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:49.603024  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:49.851135  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
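	The block above is the apiserver readiness loop: api_server.go polls /healthz roughly twice a second, treats each 500 (with individual [-] poststarthooks still pending) as "not ready yet", and stops once the endpoint returns 200 after about 6.5s. A rough sketch of that polling pattern; TLS verification is skipped here only to keep the example self-contained, which is an assumption for illustration, not how minikube's checker actually authenticates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires. TLS verification is disabled purely to
// keep the sketch self-contained; a real client would trust the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// a 500 with per-hook [+]/[-] lines means some poststarthooks
			// have not finished; log the body and retry
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.223:8443/healthz", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
```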
	I0805 12:58:47.067245  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:47.067895  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:47.067930  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:47.067838  452186 retry.go:31] will retry after 1.275228928s: waiting for machine to come up
	I0805 12:58:48.344881  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:48.345295  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:48.345319  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:48.345258  452186 retry.go:31] will retry after 1.826891386s: waiting for machine to come up
	I0805 12:58:50.174583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:50.175111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:50.175138  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:50.175074  452186 retry.go:31] will retry after 1.53756677s: waiting for machine to come up
	I0805 12:58:51.714025  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:51.714529  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:51.714553  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:51.714485  452186 retry.go:31] will retry after 2.762270002s: waiting for machine to come up
	I0805 12:58:47.908896  450884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575029516s)
	I0805 12:58:47.908929  450884 crio.go:469] duration metric: took 2.575138566s to extract the tarball
	I0805 12:58:47.908938  450884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:58:47.964757  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:48.013358  450884 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:58:48.013392  450884 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:58:48.013404  450884 kubeadm.go:934] updating node { 192.168.50.228 8444 v1.30.3 crio true true} ...
	I0805 12:58:48.013533  450884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-371585 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:48.013623  450884 ssh_runner.go:195] Run: crio config
	I0805 12:58:48.062183  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:48.062219  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:48.062238  450884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:48.062274  450884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-371585 NodeName:default-k8s-diff-port-371585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:48.062474  450884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-371585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:48.062552  450884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:58:48.076490  450884 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:48.076583  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:48.090058  450884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0805 12:58:48.110202  450884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:58:48.131420  450884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
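	The kubeadm config printed above is rendered from the cluster parameters and staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2172-byte scp just above); a later diff against the existing kubeadm.yaml decides whether the restart needs reconfiguration. A simplified sketch of that templating step; the fragment and field names below are assumptions for illustration, not minikube's actual bootstrapper templates:

```go
package main

import (
	"os"
	"text/template"
)

// clusterParams carries just the fields the fragment below needs;
// the real config has many more knobs.
type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

// initConfig is a trimmed-down InitConfiguration fragment in the same
// style as the dump above; it is illustrative, not the full template.
const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress: "192.168.50.228",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-371585",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	// render to stdout; the tooling instead stages the result as kubeadm.yaml.new
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```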
	I0805 12:58:48.151774  450884 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:48.156904  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
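	Both host.minikube.internal and control-plane.minikube.internal are pinned the same way: any existing /etc/hosts line for the name is dropped and a fresh "IP<TAB>name" entry is appended (the grep -v / echo / cp one-liner above). The same effect done natively in Go rather than via /bin/bash, as a sketch:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<TAB>hostname" and
// appends "ip<TAB>hostname" - the same effect as the shell pipeline above,
// just done in-process instead of through /bin/bash and sudo cp.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// /tmp path used here so the example does not touch the real /etc/hosts
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.50.228", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```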
	I0805 12:58:48.172398  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:48.292999  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:48.310331  450884 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585 for IP: 192.168.50.228
	I0805 12:58:48.310366  450884 certs.go:194] generating shared ca certs ...
	I0805 12:58:48.310389  450884 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:48.310576  450884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:48.310640  450884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:48.310658  450884 certs.go:256] generating profile certs ...
	I0805 12:58:48.310803  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/client.key
	I0805 12:58:48.310881  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key.f7891227
	I0805 12:58:48.310946  450884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key
	I0805 12:58:48.311231  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:48.311317  450884 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:48.311354  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:48.311408  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:48.311447  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:48.311485  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:48.311545  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:48.312365  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:48.363733  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:48.395662  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:48.450822  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:48.495611  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 12:58:48.529393  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:58:48.557543  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:48.584777  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:48.611987  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:48.637500  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:48.664469  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:48.690221  450884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:48.709082  450884 ssh_runner.go:195] Run: openssl version
	I0805 12:58:48.716181  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:48.728455  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733395  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733456  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.739295  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:48.750515  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:48.761506  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.765995  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.766052  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.772121  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:48.783123  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:48.794318  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798795  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798843  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.804878  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:48.816757  450884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:48.821686  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:48.828121  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:48.834386  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:48.840425  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:48.846218  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:48.852035  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
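	Two certificate checks dominate this part of the restart: installing the CA bundles under /etc/ssl/certs by OpenSSL subject hash (the "test -L ... || ln -fs ... /etc/ssl/certs/b5213941.0" style commands), and verifying with "openssl x509 -checkend 86400" that none of the control-plane certificates expires within the next day. A compact sketch of both, again shelling out to openssl; the paths are the ones from the log, but the helper names are made up:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name the
// /etc/ssl/certs/<hash>.0 symlink (compare "openssl x509 -hash -noout -in ...").
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// installCert links the certificate into /etc/ssl/certs under its hash,
// mirroring the "test -L ... || ln -fs ..." commands in the log.
func installCert(certPath string) error {
	hash, err := subjectHash(certPath)
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	script := fmt.Sprintf("test -L %s || ln -fs %s %s", link, certPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", script).Run()
}

// expiresWithinDay reports whether the certificate expires in the next
// 86400 seconds: "openssl x509 -noout -checkend 86400" exits non-zero when
// the certificate will expire within the window (or cannot be read).
func expiresWithinDay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	fmt.Println(expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}
```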
	I0805 12:58:48.857997  450884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:48.858131  450884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:48.858179  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.908402  450884 cri.go:89] found id: ""
	I0805 12:58:48.908471  450884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:48.921185  450884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:48.921207  450884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:48.921258  450884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:48.932907  450884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:48.933927  450884 kubeconfig.go:125] found "default-k8s-diff-port-371585" server: "https://192.168.50.228:8444"
	I0805 12:58:48.936058  450884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:48.947233  450884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.228
	I0805 12:58:48.947262  450884 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:48.947273  450884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:48.947313  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.988179  450884 cri.go:89] found id: ""
	I0805 12:58:48.988281  450884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:49.005901  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:49.016576  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:49.016597  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:49.016648  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 12:58:49.029718  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:49.029822  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:49.041670  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 12:58:49.051650  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:49.051724  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:49.061671  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.071671  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:49.071755  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.082022  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 12:58:49.092013  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:49.092103  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:49.105446  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:49.118581  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:49.233260  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.199462  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.418823  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.500350  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.594991  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:50.595109  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.096171  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.596111  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.633309  450884 api_server.go:72] duration metric: took 1.038316986s to wait for apiserver process to appear ...
	I0805 12:58:51.633350  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:51.633377  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:51.634005  450884 api_server.go:269] stopped: https://192.168.50.228:8444/healthz: Get "https://192.168.50.228:8444/healthz": dial tcp 192.168.50.228:8444: connect: connection refused
	I0805 12:58:50.021635  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:50.036338  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:50.060746  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:50.159670  450576 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:50.159724  450576 system_pods.go:61] "coredns-6f6b679f8f-nkv88" [ee7e59fb-2500-4d7a-9537-e38e08fb2445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:50.159737  450576 system_pods.go:61] "etcd-no-preload-669469" [095df0f1-069a-419f-815b-ddbec3a2291f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:50.159762  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [20b45902-b807-457a-93b3-d2b9b76d2598] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:50.159772  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [122a47ed-7f6f-4b2e-980a-45f41b997dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:50.159780  450576 system_pods.go:61] "kube-proxy-cwq69" [78e0333b-a0f4-40a6-a04d-6971bb4d09a8] Running
	I0805 12:58:50.159788  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [88010c2b-b32f-4fe1-952d-262e881b76dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:50.159796  450576 system_pods.go:61] "metrics-server-6867b74b74-p7b2r" [7e4dd805-07c8-4339-bf1a-57a98fd674cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:50.159808  450576 system_pods.go:61] "storage-provisioner" [207c46c5-c3c0-4f0b-b3ea-9b42b9e5f761] Running
	I0805 12:58:50.159817  450576 system_pods.go:74] duration metric: took 99.038765ms to wait for pod list to return data ...
	I0805 12:58:50.159830  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:50.163888  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:50.163923  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:50.163956  450576 node_conditions.go:105] duration metric: took 4.11869ms to run NodePressure ...
	I0805 12:58:50.163980  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.849885  450576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854483  450576 kubeadm.go:739] kubelet initialised
	I0805 12:58:50.854505  450576 kubeadm.go:740] duration metric: took 4.588388ms waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854514  450576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:50.861245  450576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:52.869370  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:52.134427  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.933253  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.933288  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:54.933305  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.970883  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.970928  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:55.134250  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.139762  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.139798  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:55.634499  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.644495  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.644532  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.134123  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.141958  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:56.142002  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.633573  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.640578  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 12:58:56.649624  450884 api_server.go:141] control plane version: v1.30.3
	I0805 12:58:56.649659  450884 api_server.go:131] duration metric: took 5.016299114s to wait for apiserver health ...
	I0805 12:58:56.649671  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:56.649681  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:56.651587  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:54.478201  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:54.478619  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:54.478650  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:54.478579  452186 retry.go:31] will retry after 2.992766963s: waiting for machine to come up
	I0805 12:58:56.652853  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:56.663878  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:56.699765  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:56.715040  450884 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:56.715078  450884 system_pods.go:61] "coredns-7db6d8ff4d-8rzb7" [df42e41d-4544-493f-a09d-678df1fb5258] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:56.715085  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [1ab6cd59-432a-44b8-95f2-948c585d9bbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:56.715092  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [c9173b98-c77e-4ad0-aea5-c894c045e0c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:56.715101  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [283737ec-1afa-4994-9cee-b655a8397a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:56.715105  450884 system_pods.go:61] "kube-proxy-5dr9v" [767ccb8b-2db0-4b59-b3b0-e099185bc725] Running
	I0805 12:58:56.715111  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [fb3cfdea-9370-4842-a5ab-5ac24804f59e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:56.715116  450884 system_pods.go:61] "metrics-server-569cc877fc-dsrqr" [0d4c79e4-aa6c-42f5-840b-91b9d714d078] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:56.715125  450884 system_pods.go:61] "storage-provisioner" [2dba6f50-5cdc-4195-8daf-c19dac38f488] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:58:56.715133  450884 system_pods.go:74] duration metric: took 15.343284ms to wait for pod list to return data ...
	I0805 12:58:56.715144  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:56.720006  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:56.720031  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:56.720042  450884 node_conditions.go:105] duration metric: took 4.893566ms to run NodePressure ...
	I0805 12:58:56.720059  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:56.985822  450884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990461  450884 kubeadm.go:739] kubelet initialised
	I0805 12:58:56.990484  450884 kubeadm.go:740] duration metric: took 4.636814ms waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990493  450884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:56.996266  450884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.001407  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001434  450884 pod_ready.go:81] duration metric: took 5.140963ms for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.001446  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001456  450884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.005437  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005473  450884 pod_ready.go:81] duration metric: took 3.995646ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.005486  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005495  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.009923  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009943  450884 pod_ready.go:81] duration metric: took 4.439871ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.009952  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009958  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:54.869534  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:56.370007  450576 pod_ready.go:92] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:56.370035  450576 pod_ready.go:81] duration metric: took 5.508756413s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:56.370045  450576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376357  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.376386  450576 pod_ready.go:81] duration metric: took 2.006334873s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376396  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.473094  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:57.473555  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:57.473587  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:57.473495  452186 retry.go:31] will retry after 4.27138033s: waiting for machine to come up
	I0805 12:59:01.750111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750558  451238 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:59:01.750586  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750593  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:59:01.751003  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:59:01.751061  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.751081  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:59:01.751112  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | skip adding static IP to network mk-old-k8s-version-635707 - found existing host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"}
	I0805 12:59:01.751130  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:59:01.753240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753634  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.753672  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753810  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:59:01.753854  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:59:01.753900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:01.753919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:59:01.753933  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:59:01.875919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:01.876298  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:59:01.877028  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:01.879644  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880120  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.880164  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880508  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:59:01.880778  451238 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:01.880805  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:01.881039  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.882998  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883362  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.883389  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883553  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.883755  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.883900  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.884012  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.884248  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.884496  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.884511  451238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:57.103049  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103095  450884 pod_ready.go:81] duration metric: took 93.113727ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.103109  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103116  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503531  450884 pod_ready.go:92] pod "kube-proxy-5dr9v" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:57.503556  450884 pod_ready.go:81] duration metric: took 400.433562ms for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503565  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:59.514591  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:02.011308  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.148902  450393 start.go:364] duration metric: took 56.514427046s to acquireMachinesLock for "embed-certs-321139"
	I0805 12:59:03.148967  450393 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:59:03.148976  450393 fix.go:54] fixHost starting: 
	I0805 12:59:03.149432  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:03.149473  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:03.166485  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0805 12:59:03.166934  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:03.167443  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:03.167469  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:03.167808  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:03.168062  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:03.168258  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:03.170011  450393 fix.go:112] recreateIfNeeded on embed-certs-321139: state=Stopped err=<nil>
	I0805 12:59:03.170036  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	W0805 12:59:03.170221  450393 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:59:03.172109  450393 out.go:177] * Restarting existing kvm2 VM for "embed-certs-321139" ...
	I0805 12:58:58.886766  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.886792  450576 pod_ready.go:81] duration metric: took 510.389529ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.886804  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891878  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.891907  450576 pod_ready.go:81] duration metric: took 5.094036ms for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891919  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896953  450576 pod_ready.go:92] pod "kube-proxy-cwq69" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.896981  450576 pod_ready.go:81] duration metric: took 5.054422ms for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896995  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902437  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.902456  450576 pod_ready.go:81] duration metric: took 5.453487ms for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902465  450576 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:00.909633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.410487  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.173728  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Start
	I0805 12:59:03.173932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring networks are active...
	I0805 12:59:03.174932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network default is active
	I0805 12:59:03.175441  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network mk-embed-certs-321139 is active
	I0805 12:59:03.176102  450393 main.go:141] libmachine: (embed-certs-321139) Getting domain xml...
	I0805 12:59:03.176848  450393 main.go:141] libmachine: (embed-certs-321139) Creating domain...
	I0805 12:59:01.984198  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:01.984237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984501  451238 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:59:01.984534  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984750  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.987690  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988085  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.988115  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988240  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.988470  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988782  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988945  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.989173  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.989407  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.989425  451238 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:59:02.108368  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:59:02.108406  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.111301  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111669  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.111712  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111837  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.112027  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112212  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112393  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.112563  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.112797  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.112824  451238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:02.225638  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:02.225681  451238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:02.225731  451238 buildroot.go:174] setting up certificates
	I0805 12:59:02.225745  451238 provision.go:84] configureAuth start
	I0805 12:59:02.225760  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:02.226099  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:02.229252  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229643  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.229671  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229885  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.232479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.232912  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.232951  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.233125  451238 provision.go:143] copyHostCerts
	I0805 12:59:02.233188  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:02.233201  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:02.233271  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:02.233412  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:02.233426  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:02.233459  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:02.233543  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:02.233553  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:02.233581  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:02.233661  451238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
	I0805 12:59:02.470213  451238 provision.go:177] copyRemoteCerts
	I0805 12:59:02.470328  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:02.470369  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.473450  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473791  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.473829  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473964  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.474173  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.474313  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.474429  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:02.558831  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:02.583652  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:59:02.609154  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:02.635827  451238 provision.go:87] duration metric: took 410.067115ms to configureAuth
	I0805 12:59:02.635862  451238 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:02.636109  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:59:02.636357  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.638964  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639466  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.639489  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639644  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.639953  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640197  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640454  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.640733  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.640975  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.641000  451238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:02.917466  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:02.917499  451238 machine.go:97] duration metric: took 1.036701572s to provisionDockerMachine
	I0805 12:59:02.917512  451238 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:59:02.917522  451238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:02.917539  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:02.917946  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:02.917979  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.920900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921383  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.921426  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.921773  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.921958  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.922220  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.003670  451238 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:03.008348  451238 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:03.008384  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:03.008468  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:03.008588  451238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:03.008727  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:03.019098  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:03.042969  451238 start.go:296] duration metric: took 125.441712ms for postStartSetup
	I0805 12:59:03.043011  451238 fix.go:56] duration metric: took 20.961935899s for fixHost
	I0805 12:59:03.043034  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.045667  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046030  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.046062  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046254  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.046508  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046701  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046824  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.047002  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:03.047182  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:03.047192  451238 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:59:03.148773  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862743.120260193
	
	I0805 12:59:03.148798  451238 fix.go:216] guest clock: 1722862743.120260193
	I0805 12:59:03.148807  451238 fix.go:229] Guest: 2024-08-05 12:59:03.120260193 +0000 UTC Remote: 2024-08-05 12:59:03.043015059 +0000 UTC m=+231.118249223 (delta=77.245134ms)
	I0805 12:59:03.148831  451238 fix.go:200] guest clock delta is within tolerance: 77.245134ms
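
For reference, the "guest clock" check above runs `date +%s.%N` inside the VM over SSH and compares the result with the host's wall clock; the fix step only warns or resyncs when the delta exceeds a tolerance. A rough by-hand equivalent using the SSH key shown earlier in this log (the bc arithmetic and the one-off variables are illustration only, not minikube's implementation):

$ KEY=/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa
$ HOST_TS=$(date +%s.%N)
$ GUEST_TS=$(ssh -i "$KEY" docker@192.168.61.41 'date +%s.%N')
$ # delta in seconds; in the run above it was ~0.077s, well inside tolerance
$ echo "$GUEST_TS - $HOST_TS" | bc -l
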
	I0805 12:59:03.148836  451238 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 21.067801046s
	I0805 12:59:03.148857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.149131  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:03.152026  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152444  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.152475  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152645  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153423  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153495  451238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:03.153551  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.153860  451238 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:03.153895  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.156566  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156903  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.156963  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157187  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157411  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.157479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.157508  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157594  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.157770  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157782  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.157924  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.158107  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.158344  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.254162  451238 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:03.260684  451238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:03.409837  451238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:03.416010  451238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:03.416093  451238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:03.433548  451238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:03.433584  451238 start.go:495] detecting cgroup driver to use...
	I0805 12:59:03.433667  451238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:03.450756  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:03.467281  451238 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:03.467341  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:03.482537  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:03.498623  451238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:03.621224  451238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:03.781777  451238 docker.go:233] disabling docker service ...
	I0805 12:59:03.781842  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:03.798020  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:03.818262  451238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:03.940897  451238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:04.075622  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:04.092487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:04.112699  451238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:59:04.112769  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.124102  451238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:04.124181  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.136339  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.147689  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.158552  451238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
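
The sed edits above rewrite CRI-O's drop-in config rather than the main crio.conf. After they run, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following (a sketch only; the real drop-in carries additional keys, and the section headers are shown for orientation):

$ sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
EOF

Together with the crictl.yaml written just before (runtime-endpoint pointing at /var/run/crio/crio.sock), this points crictl at the CRI-O socket and keeps the runtime on the cgroupfs driver, matching the kubelet configuration generated further down.
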
	I0805 12:59:04.171412  451238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:04.183284  451238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:04.183336  451238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:04.199465  451238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
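
The sysctl probe above exits with status 255 simply because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/ does not exist; loading the module creates those keys. A minimal reproduction of the sequence:

$ sudo sysctl net.bridge.bridge-nf-call-iptables        # fails: key does not exist yet
$ sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
$ sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # required for pod traffic forwarding
$ sudo sysctl net.bridge.bridge-nf-call-iptables        # now reports a value
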
	I0805 12:59:04.215571  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:04.342540  451238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:04.521705  451238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:04.521786  451238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:04.526734  451238 start.go:563] Will wait 60s for crictl version
	I0805 12:59:04.526795  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:04.530528  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:04.572468  451238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:04.572557  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.602411  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.636641  451238 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:59:04.638062  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:04.641240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641734  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:04.641763  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641991  451238 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:04.646446  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
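
The /etc/hosts update above follows a simple idempotent pattern: filter out any existing host.minikube.internal entry, append the current gateway mapping, and copy the result back in a single sudo step. Essentially the same command as the Run: line above, spread out for readability:

$ { grep -v $'\thost.minikube.internal$' /etc/hosts
>   echo $'192.168.61.1\thost.minikube.internal'
> } > /tmp/h.$$
$ sudo cp /tmp/h.$$ /etc/hosts
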
	I0805 12:59:04.659876  451238 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:04.660037  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:59:04.660105  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:04.709636  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:04.709725  451238 ssh_runner.go:195] Run: which lz4
	I0805 12:59:04.714439  451238 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:59:04.719014  451238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:04.719047  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:59:06.414858  451238 crio.go:462] duration metric: took 1.70045694s to copy over tarball
	I0805 12:59:06.414950  451238 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:04.513198  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.018197  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:05.911274  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.911405  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:04.478626  450393 main.go:141] libmachine: (embed-certs-321139) Waiting to get IP...
	I0805 12:59:04.479615  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.480147  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.480209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.480103  452359 retry.go:31] will retry after 236.369287ms: waiting for machine to come up
	I0805 12:59:04.718716  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.719184  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.719209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.719125  452359 retry.go:31] will retry after 296.553947ms: waiting for machine to come up
	I0805 12:59:05.017667  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.018198  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.018235  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.018143  452359 retry.go:31] will retry after 427.78496ms: waiting for machine to come up
	I0805 12:59:05.447507  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.448075  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.448105  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.448038  452359 retry.go:31] will retry after 469.229133ms: waiting for machine to come up
	I0805 12:59:05.918469  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.919013  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.919047  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.918998  452359 retry.go:31] will retry after 720.005641ms: waiting for machine to come up
	I0805 12:59:06.641103  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:06.641679  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:06.641708  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:06.641634  452359 retry.go:31] will retry after 591.439327ms: waiting for machine to come up
	I0805 12:59:07.234573  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:07.235179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:07.235207  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:07.235063  452359 retry.go:31] will retry after 1.087958168s: waiting for machine to come up
	I0805 12:59:08.324599  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:08.325179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:08.325212  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:08.325129  452359 retry.go:31] will retry after 1.316276197s: waiting for machine to come up
	I0805 12:59:09.473711  451238 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058718584s)
	I0805 12:59:09.473740  451238 crio.go:469] duration metric: took 3.058854233s to extract the tarball
	I0805 12:59:09.473748  451238 ssh_runner.go:146] rm: /preloaded.tar.lz4
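
Since no preloaded images were found in CRI-O's store, minikube ships the cached preload tarball into the guest and unpacks it over /var (where CRI-O keeps its image store), preserving xattrs so file capabilities survive. Done by hand over plain ssh/scp it would look roughly like this; minikube uses its own SSH runner rather than the scp binary, and the /tmp staging path here is only a convenience for the example:

$ KEY=/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa
$ TARBALL=/home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
$ scp -i "$KEY" "$TARBALL" docker@192.168.61.41:/tmp/preloaded.tar.lz4
$ ssh -i "$KEY" docker@192.168.61.41 \
>     'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm -f /tmp/preloaded.tar.lz4'
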
	I0805 12:59:09.524420  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:09.562003  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:09.562035  451238 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:59:09.562107  451238 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.562159  451238 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.562156  451238 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.562194  451238 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.562228  451238 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.562256  451238 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.562374  451238 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:59:09.562274  451238 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.563981  451238 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.563993  451238 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.564007  451238 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.564015  451238 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.564032  451238 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.564041  451238 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.564076  451238 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.564075  451238 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:59:09.727888  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.732060  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.732150  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.736408  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.748051  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.753579  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.762561  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:59:09.822623  451238 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:59:09.822681  451238 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.822742  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.824314  451238 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:59:09.824360  451238 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.824404  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905619  451238 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:59:09.905778  451238 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.905738  451238 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:59:09.905944  451238 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.905998  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905851  451238 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:59:09.906075  451238 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.906133  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905861  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916767  451238 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:59:09.916796  451238 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:59:09.916812  451238 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.916830  451238 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:59:09.916864  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916868  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916905  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.916958  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.918683  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.918718  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.918776  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:10.007687  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:59:10.007721  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:59:10.007871  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:10.042432  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:59:10.061343  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:59:10.061400  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:59:10.061469  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:59:10.073852  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:59:10.084957  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:59:10.423355  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:10.563992  451238 cache_images.go:92] duration metric: took 1.001937985s to LoadCachedImages
	W0805 12:59:10.564184  451238 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0805 12:59:10.564211  451238 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:59:10.564345  451238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:10.564427  451238 ssh_runner.go:195] Run: crio config
	I0805 12:59:10.612146  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:59:10.612180  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:10.612197  451238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:10.612226  451238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:59:10.612415  451238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:10.612507  451238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:59:10.623036  451238 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:10.623121  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:10.633484  451238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:59:10.652444  451238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:10.673192  451238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0805 12:59:10.694533  451238 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:10.699901  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:10.714251  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:10.838992  451238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:10.857248  451238 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:59:10.857279  451238 certs.go:194] generating shared ca certs ...
	I0805 12:59:10.857303  451238 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:10.857515  451238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:10.857587  451238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:10.857602  451238 certs.go:256] generating profile certs ...
	I0805 12:59:10.857746  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:59:10.857847  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:59:10.857907  451238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:59:10.858072  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:10.858122  451238 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:10.858143  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:10.858177  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:10.858207  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:10.858235  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:10.858294  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:10.859247  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:10.908518  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:10.949310  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:10.981447  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:11.008085  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:59:11.035539  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:59:11.071371  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:11.099842  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:59:11.135629  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:11.164194  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:11.190595  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:11.219765  451238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:11.240836  451238 ssh_runner.go:195] Run: openssl version
	I0805 12:59:11.247516  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:11.260736  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266004  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266100  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.273012  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:11.285453  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:11.296934  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301588  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301655  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.307459  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:11.318833  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:11.330224  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334864  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334917  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.341338  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:11.353084  451238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:11.358532  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:11.365419  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:11.371581  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:11.378308  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:11.384640  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:11.390622  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
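
Each of the openssl calls above is a cheap expiry check: -checkend 86400 asks whether the certificate will still be valid 86400 seconds (24 hours) from now, and the exit status is what matters: 0 means it stays valid, non-zero means it would expire within the window and needs regeneration. For example, on a still-valid cert:

$ openssl x509 -noout -checkend 86400 \
>     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
Certificate will not expire
$ echo $?
0
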
	I0805 12:59:11.397027  451238 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:11.397199  451238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:11.397286  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.436612  451238 cri.go:89] found id: ""
	I0805 12:59:11.436689  451238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:11.447906  451238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:11.447927  451238 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:11.447984  451238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:11.459282  451238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:11.460548  451238 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:11.461355  451238 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-635707" cluster setting kubeconfig missing "old-k8s-version-635707" context setting]
	I0805 12:59:11.462324  451238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:11.476306  451238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:11.487869  451238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.41
	I0805 12:59:11.487911  451238 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:11.487927  451238 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:11.487988  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.526601  451238 cri.go:89] found id: ""
	I0805 12:59:11.526674  451238 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:11.545429  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:11.556725  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:11.556755  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:11.556820  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:11.566564  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:11.566648  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:11.576859  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:11.586237  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:11.586329  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:11.596721  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.607239  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:11.607340  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.617626  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:11.627179  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:11.627251  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:11.637566  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:11.648889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:11.780270  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
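The 451238 run above is rebuilding a v1.20.0 control plane from scratch: it checks the four kubeconfig files with ls -la, treats the non-zero exit as "nothing stale to clean up", greps each file for the control-plane endpoint, removes any that do not match, and then re-runs the kubeadm init phases for certs and kubeconfigs. A minimal local sketch of that check-then-clean pattern, assuming plain os/exec in place of minikube's remote ssh_runner (the file list and endpoint are taken from the log; everything else is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// Config files the log inspects before deciding whether stale-config cleanup is needed.
var configFiles = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	// If none of the files exist, `ls -la` exits non-zero and cleanup is skipped,
	// mirroring "config check failed, skipping stale config cleanup" in the log.
	if err := exec.Command("sudo", append([]string{"ls", "-la"}, configFiles...)...).Run(); err != nil {
		fmt.Println("no existing configuration files, skipping stale config cleanup")
		return
	}
	for _, f := range configFiles {
		// grep exits non-zero when the endpoint is missing; such files are removed
		// so the kubeadm kubeconfig phase can regenerate them.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}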
	I0805 12:59:08.018320  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:08.018363  450884 pod_ready.go:81] duration metric: took 10.514788401s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:08.018379  450884 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:10.270876  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:10.409419  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:12.410565  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:09.643077  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:09.643655  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:09.643692  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:09.643554  452359 retry.go:31] will retry after 1.473183692s: waiting for machine to come up
	I0805 12:59:11.118468  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:11.119005  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:11.119035  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:11.118943  452359 retry.go:31] will retry after 2.036333626s: waiting for machine to come up
	I0805 12:59:13.156866  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:13.157390  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:13.157419  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:13.157339  452359 retry.go:31] will retry after 2.095065362s: waiting for machine to come up
	I0805 12:59:12.549918  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.781853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.877381  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.978141  451238 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:12.978250  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.479242  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.978456  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.478575  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.978783  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.479342  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.978307  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.479180  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
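The half-second cadence of the pgrep runs above is minikube waiting for the kube-apiserver process to appear after the control-plane phases were kicked off. A minimal sketch of that poll loop, run locally rather than over SSH; the 500ms interval and the process pattern come from the log, while the overall timeout is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the timeout expires.
// The ~500ms cadence matches the timestamps in the log; the timeout is assumed.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process matching %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}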
	I0805 12:59:12.526543  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.027362  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:14.909480  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:16.911090  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.253589  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:15.254081  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:15.254111  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:15.254020  452359 retry.go:31] will retry after 2.859783781s: waiting for machine to come up
	I0805 12:59:18.116972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:18.117528  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:18.117559  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:18.117486  452359 retry.go:31] will retry after 4.456427854s: waiting for machine to come up
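Meanwhile the 450393 run above is still waiting for the embed-certs-321139 VM to pick up a DHCP lease; each failed lookup schedules another retry with a longer, jittered delay (1.47s, 2.04s, 2.10s, 2.86s, 4.46s in the log). A minimal sketch of that grow-the-delay loop; lookupIP is a hypothetical stand-in for the libvirt lease query, and the growth factor and jitter are assumptions chosen only to produce a similar pattern:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 1 * time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP("embed-certs-321139")
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, which is why the intervals in the log
		// are not an exact geometric series.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}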
	I0805 12:59:16.978915  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.978574  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.478343  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.978820  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.478488  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.978335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.478945  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.979040  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.479324  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.525332  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.525407  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.025092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.410416  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:21.908646  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.576842  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577261  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has current primary IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577291  450393 main.go:141] libmachine: (embed-certs-321139) Found IP for machine: 192.168.39.196
	I0805 12:59:22.577306  450393 main.go:141] libmachine: (embed-certs-321139) Reserving static IP address...
	I0805 12:59:22.577834  450393 main.go:141] libmachine: (embed-certs-321139) Reserved static IP address: 192.168.39.196
	I0805 12:59:22.577877  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.577893  450393 main.go:141] libmachine: (embed-certs-321139) Waiting for SSH to be available...
	I0805 12:59:22.577915  450393 main.go:141] libmachine: (embed-certs-321139) DBG | skip adding static IP to network mk-embed-certs-321139 - found existing host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"}
	I0805 12:59:22.577922  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Getting to WaitForSSH function...
	I0805 12:59:22.580080  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580520  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.580552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580707  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH client type: external
	I0805 12:59:22.580742  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa (-rw-------)
	I0805 12:59:22.580764  450393 main.go:141] libmachine: (embed-certs-321139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:22.580778  450393 main.go:141] libmachine: (embed-certs-321139) DBG | About to run SSH command:
	I0805 12:59:22.580793  450393 main.go:141] libmachine: (embed-certs-321139) DBG | exit 0
	I0805 12:59:22.703872  450393 main.go:141] libmachine: (embed-certs-321139) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:22.704333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetConfigRaw
	I0805 12:59:22.705046  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:22.707544  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.707919  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.707951  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.708240  450393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/config.json ...
	I0805 12:59:22.708474  450393 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:22.708501  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:22.708755  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.711177  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711488  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.711510  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711639  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.711842  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.711998  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.712157  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.712378  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.712581  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.712595  450393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:59:22.816371  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:22.816433  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816708  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:59:22.816743  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816959  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.819715  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820085  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.820108  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820321  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.820510  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820656  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820794  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.820952  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.821203  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.821229  450393 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-321139 && echo "embed-certs-321139" | sudo tee /etc/hostname
	I0805 12:59:22.938845  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-321139
	
	I0805 12:59:22.938888  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.942264  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942651  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.942684  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942904  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.943161  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943383  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943568  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.943777  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.943987  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.944011  450393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-321139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-321139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-321139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:23.062700  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:23.062734  450393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:23.062762  450393 buildroot.go:174] setting up certificates
	I0805 12:59:23.062774  450393 provision.go:84] configureAuth start
	I0805 12:59:23.062800  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:23.063142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.065839  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066140  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.066175  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066359  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.069214  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069562  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.069597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069746  450393 provision.go:143] copyHostCerts
	I0805 12:59:23.069813  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:23.069827  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:23.069897  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:23.070014  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:23.070025  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:23.070083  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:23.070185  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:23.070197  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:23.070226  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:23.070308  450393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.embed-certs-321139 san=[127.0.0.1 192.168.39.196 embed-certs-321139 localhost minikube]
	I0805 12:59:23.223660  450393 provision.go:177] copyRemoteCerts
	I0805 12:59:23.223759  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:23.223799  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.226548  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.226980  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.227014  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.227195  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.227449  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.227624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.227801  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.311952  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:59:23.336888  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:23.363397  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:23.388197  450393 provision.go:87] duration metric: took 325.408192ms to configureAuth
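configureAuth above regenerates the machine's server certificate with the SANs printed in the log (127.0.0.1, 192.168.39.196, embed-certs-321139, localhost, minikube), signs it against the host-side CA, and copies the results to /etc/docker on the guest. A self-contained sketch of producing a SAN-bearing server certificate with Go's crypto/x509; it self-signs instead of using the jenkins CA key pair, so read it as an illustration of the SAN handling only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-321139"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		DNSNames:    []string{"embed-certs-321139", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.196")},
	}
	// Self-signed here; the log signs with the ca.pem/ca-key.pem pair instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}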
	I0805 12:59:23.388234  450393 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:23.388470  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:23.388596  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.391247  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.391626  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391843  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.392054  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392240  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392371  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.392528  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.392825  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.392853  450393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:23.675427  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:23.675459  450393 machine.go:97] duration metric: took 966.969142ms to provisionDockerMachine
	I0805 12:59:23.675472  450393 start.go:293] postStartSetup for "embed-certs-321139" (driver="kvm2")
	I0805 12:59:23.675484  450393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:23.675515  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.675885  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:23.675912  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.678780  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679100  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.679152  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.679524  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.679657  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.679860  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.764372  450393 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:23.769059  450393 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:23.769088  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:23.769162  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:23.769231  450393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:23.769334  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:23.781287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:23.808609  450393 start.go:296] duration metric: took 133.117086ms for postStartSetup
	I0805 12:59:23.808665  450393 fix.go:56] duration metric: took 20.659690035s for fixHost
	I0805 12:59:23.808694  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.811519  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.811948  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.811978  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.812164  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.812366  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812539  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812708  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.812897  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.813137  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.813151  450393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:59:23.916498  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862763.883942670
	
	I0805 12:59:23.916521  450393 fix.go:216] guest clock: 1722862763.883942670
	I0805 12:59:23.916536  450393 fix.go:229] Guest: 2024-08-05 12:59:23.88394267 +0000 UTC Remote: 2024-08-05 12:59:23.8086712 +0000 UTC m=+359.764794687 (delta=75.27147ms)
	I0805 12:59:23.916570  450393 fix.go:200] guest clock delta is within tolerance: 75.27147ms
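The fix.go lines above read the guest clock with date +%s.%N, compare it to the host-side timestamp, and only resynchronize when the delta leaves tolerance; here the 75.27ms difference passes. A tiny sketch of that comparison using the two timestamps from the log; the 1s tolerance value is an assumption, since the log only reports that the delta was acceptable:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, 8, 5, 12, 59, 23, 883942670, time.UTC)  // parsed from 1722862763.883942670
	remote := time.Date(2024, 8, 5, 12, 59, 23, 808671200, time.UTC) // host-side reading from the log

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed; the log only says "within tolerance"
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
	}
}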
	I0805 12:59:23.916578  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 20.767637373s
	I0805 12:59:23.916598  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.916867  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.919570  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.919972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.919999  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.920142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920666  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920837  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920930  450393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:23.920981  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.921063  450393 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:23.921083  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.924176  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924557  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924588  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924613  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924635  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924749  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.924936  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.925021  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925127  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925286  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925369  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.925454  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:24.000693  450393 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:24.023194  450393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:24.178807  450393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:24.184954  450393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:24.185031  450393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:24.201420  450393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:24.201453  450393 start.go:495] detecting cgroup driver to use...
	I0805 12:59:24.201543  450393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:24.218603  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:24.233928  450393 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:24.233999  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:24.248455  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:24.263355  450393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:24.386806  450393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:24.565128  450393 docker.go:233] disabling docker service ...
	I0805 12:59:24.565229  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:24.581053  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:24.594297  450393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:24.716615  450393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:24.835687  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:24.850666  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:24.870993  450393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:59:24.871055  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.881731  450393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:24.881815  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.893156  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.903802  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.915189  450393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:24.926967  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.938008  450393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.956033  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.967863  450393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:24.977758  450393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:24.977822  450393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:24.993837  450393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:25.005009  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:25.135856  450393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:25.277425  450393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:25.277513  450393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:25.282628  450393 start.go:563] Will wait 60s for crictl version
	I0805 12:59:25.282704  450393 ssh_runner.go:195] Run: which crictl
	I0805 12:59:25.287324  450393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:25.335315  450393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:25.335396  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.367574  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.398926  450393 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
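Preparing CRI-O above is a run of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs as cgroup_manager, conmon_cgroup, the unprivileged-port sysctl), followed by loading br_netfilter, enabling IPv4 forwarding, and restarting crio. A minimal Go sketch of one of those edits, rewriting the cgroup_manager line the way the sed command does; the path and key come from the log, the rest is an ordinary regexp replace:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	updated := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, updated, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}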
	I0805 12:59:21.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.478367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.978424  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.478877  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.978841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.478635  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.978824  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.479076  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.979222  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.478928  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.025234  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:26.028817  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:23.909428  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.910877  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:27.911235  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.400219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:25.403052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403508  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:25.403552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403849  450393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:25.408402  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:25.423146  450393 kubeadm.go:883] updating cluster {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:25.423301  450393 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:59:25.423368  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:25.460713  450393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:59:25.460795  450393 ssh_runner.go:195] Run: which lz4
	I0805 12:59:25.464997  450393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:59:25.469397  450393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:25.469452  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:59:26.966110  450393 crio.go:462] duration metric: took 1.501152522s to copy over tarball
	I0805 12:59:26.966207  450393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:26.978648  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.478951  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.978405  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.479008  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.978521  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.479199  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.979288  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.479030  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.978372  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.479194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.525888  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:31.025690  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:30.410973  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:32.910889  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:29.287605  450393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321364872s)
	I0805 12:59:29.287636  450393 crio.go:469] duration metric: took 2.321487153s to extract the tarball
	I0805 12:59:29.287647  450393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:29.329182  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:29.372183  450393 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:59:29.372211  450393 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:59:29.372220  450393 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0805 12:59:29.372349  450393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-321139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:29.372433  450393 ssh_runner.go:195] Run: crio config
	I0805 12:59:29.426003  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:29.426025  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:29.426036  450393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:29.426059  450393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-321139 NodeName:embed-certs-321139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:59:29.426192  450393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-321139"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:29.426250  450393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:59:29.436248  450393 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:29.436315  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:29.445844  450393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0805 12:59:29.463125  450393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:29.479685  450393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0805 12:59:29.499033  450393 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:29.503175  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:29.516141  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:29.645914  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:29.664578  450393 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139 for IP: 192.168.39.196
	I0805 12:59:29.664608  450393 certs.go:194] generating shared ca certs ...
	I0805 12:59:29.664626  450393 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:29.664853  450393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:29.664922  450393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:29.664939  450393 certs.go:256] generating profile certs ...
	I0805 12:59:29.665058  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/client.key
	I0805 12:59:29.665143  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key.ce53eda3
	I0805 12:59:29.665183  450393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key
	I0805 12:59:29.665293  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:29.665324  450393 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:29.665331  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:29.665360  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:29.665382  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:29.665405  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:29.665442  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:29.666287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:29.705969  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:29.752700  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:29.779819  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:29.806578  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0805 12:59:29.832277  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:59:29.861682  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:29.888113  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:59:29.915023  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:29.942582  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:29.971225  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:29.999278  450393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:30.018294  450393 ssh_runner.go:195] Run: openssl version
	I0805 12:59:30.024645  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:30.035446  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040216  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040279  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.046151  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:30.057664  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:30.068822  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074073  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074138  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.080126  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:30.091168  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:30.103171  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108840  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108924  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.115469  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
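	The three "ln -fs" runs above make each CA visible to OpenSSL-based clients by linking it under /etc/ssl/certs as <subject-hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 are the subject hashes of the three certificates). A small Go sketch of that step, assuming openssl is installed and the process can write to /etc/ssl/certs; the helper is illustrative, minikube runs the equivalent shell commands over SSH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCert links certPath under /etc/ssl/certs as "<subject-hash>.0", the
	// layout OpenSSL uses to look up trusted roots.
	func trustCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // emulate ln -f: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}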
	I0805 12:59:30.126742  450393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:30.132008  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:30.138285  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:30.144251  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:30.150718  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:30.157183  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:30.163709  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
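	Each "openssl x509 ... -checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 hours); a failing check would force regeneration before the control plane restarts. An equivalent check sketched in Go with crypto/x509; the path is one of the files probed above and the helper name is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same condition "openssl x509 -checkend 86400" tests for 24 hours.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}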
	I0805 12:59:30.170852  450393 kubeadm.go:392] StartCluster: {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:30.170987  450393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:30.171055  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.216014  450393 cri.go:89] found id: ""
	I0805 12:59:30.216103  450393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:30.234046  450393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:30.234076  450393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:30.234151  450393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:30.245861  450393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:30.247434  450393 kubeconfig.go:125] found "embed-certs-321139" server: "https://192.168.39.196:8443"
	I0805 12:59:30.250024  450393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:30.261066  450393 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.196
	I0805 12:59:30.261116  450393 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:30.261140  450393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:30.261201  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.306587  450393 cri.go:89] found id: ""
	I0805 12:59:30.306678  450393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:30.326818  450393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:30.336908  450393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:30.336931  450393 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:30.336984  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:30.346004  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:30.346105  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:30.355979  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:30.366124  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:30.366185  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:30.376923  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.386526  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:30.386599  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.396661  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:30.406693  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:30.406765  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:30.417789  450393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:30.428214  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:30.554777  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.703579  450393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.14876196s)
	I0805 12:59:31.703620  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.925724  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.999840  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:32.089948  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:32.090084  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.590152  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.090222  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.115351  450393 api_server.go:72] duration metric: took 1.025404322s to wait for apiserver process to appear ...
	I0805 12:59:33.115385  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:59:33.115411  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:33.115983  450393 api_server.go:269] stopped: https://192.168.39.196:8443/healthz: Get "https://192.168.39.196:8443/healthz": dial tcp 192.168.39.196:8443: connect: connection refused
	I0805 12:59:33.616210  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:31.978481  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.479031  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.978796  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.478677  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.979377  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.478595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.979227  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.478695  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.978911  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.479327  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.027363  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:35.525528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:36.274855  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.274895  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.274912  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.314290  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.314325  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.615566  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.620594  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:36.620626  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.116251  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.120719  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:37.120749  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.616330  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.620778  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 12:59:37.627608  450393 api_server.go:141] control plane version: v1.30.3
	I0805 12:59:37.627640  450393 api_server.go:131] duration metric: took 4.512246076s to wait for apiserver health ...
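	The healthz loop above keeps polling https://192.168.39.196:8443/healthz until it returns 200: the early 403s appear while anonymous access to /healthz is still forbidden, and the 500s list post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished. A minimal sketch of such a polling probe in Go; skipping TLS verification is an assumption made here because this standalone probe carries no client certificates, not a description of minikube's actual client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls an apiserver /healthz endpoint until it returns 200 or the
	// deadline passes; 403 and 500 responses are treated as "not ready yet".
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.39.196:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}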
	I0805 12:59:37.627652  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:37.627661  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:37.628987  450393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:59:35.410070  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.411719  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.630068  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:59:37.650034  450393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:59:37.691891  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:59:37.704810  450393 system_pods.go:59] 8 kube-system pods found
	I0805 12:59:37.704855  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:59:37.704866  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:59:37.704887  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:59:37.704900  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:59:37.704916  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 12:59:37.704928  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:59:37.704946  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:59:37.704956  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:59:37.704967  450393 system_pods.go:74] duration metric: took 13.04358ms to wait for pod list to return data ...
	I0805 12:59:37.704980  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:59:37.710340  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:59:37.710367  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 12:59:37.710382  450393 node_conditions.go:105] duration metric: took 5.392102ms to run NodePressure ...
	I0805 12:59:37.710402  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:37.995945  450393 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000274  450393 kubeadm.go:739] kubelet initialised
	I0805 12:59:38.000295  450393 kubeadm.go:740] duration metric: took 4.323835ms waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000302  450393 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:38.006122  450393 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.012368  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012392  450393 pod_ready.go:81] duration metric: took 6.243837ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.012400  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012406  450393 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.016338  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016357  450393 pod_ready.go:81] duration metric: took 3.943012ms for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.016364  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016369  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.021019  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021044  450393 pod_ready.go:81] duration metric: took 4.667242ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.021055  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021063  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.096303  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096334  450393 pod_ready.go:81] duration metric: took 75.253785ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.096345  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096351  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.495648  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495677  450393 pod_ready.go:81] duration metric: took 399.318117ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.495687  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495694  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.896066  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896091  450393 pod_ready.go:81] duration metric: took 400.39101ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.896101  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896108  450393 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:39.295587  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295618  450393 pod_ready.go:81] duration metric: took 399.499354ms for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:39.295632  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295653  450393 pod_ready.go:38] duration metric: took 1.295340252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
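	The pod_ready lines above wait for each system-critical pod to report a Ready=True condition, and skip the wait while the hosting node itself is not Ready. A minimal client-go sketch of the underlying condition check, assuming the k8s.io/client-go module is available; the pod name and kubeconfig path are illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has a Ready=True condition.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll a sample control-plane pod until it is Ready (illustrative name).
		for {
			if ok, err := podReady(cs, "kube-system", "etcd-embed-certs-321139"); err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}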
	I0805 12:59:39.295675  450393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:59:39.308136  450393 ops.go:34] apiserver oom_adj: -16
	I0805 12:59:39.308161  450393 kubeadm.go:597] duration metric: took 9.07407738s to restartPrimaryControlPlane
	I0805 12:59:39.308170  450393 kubeadm.go:394] duration metric: took 9.137335392s to StartCluster
	I0805 12:59:39.308188  450393 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.308272  450393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:39.310750  450393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.311015  450393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:59:39.311149  450393 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:59:39.311240  450393 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-321139"
	I0805 12:59:39.311289  450393 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-321139"
	W0805 12:59:39.311303  450393 addons.go:243] addon storage-provisioner should already be in state true
	I0805 12:59:39.311301  450393 addons.go:69] Setting metrics-server=true in profile "embed-certs-321139"
	I0805 12:59:39.311305  450393 addons.go:69] Setting default-storageclass=true in profile "embed-certs-321139"
	I0805 12:59:39.311351  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311360  450393 addons.go:234] Setting addon metrics-server=true in "embed-certs-321139"
	W0805 12:59:39.311371  450393 addons.go:243] addon metrics-server should already be in state true
	I0805 12:59:39.311371  450393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-321139"
	I0805 12:59:39.311454  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311287  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:39.311848  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311897  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311906  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.311912  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311964  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.312115  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.313050  450393 out.go:177] * Verifying Kubernetes components...
	I0805 12:59:39.314390  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:39.327427  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0805 12:59:39.327687  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0805 12:59:39.328016  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328155  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328609  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328649  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.328735  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328786  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.329013  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329086  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329560  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329599  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.329676  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329721  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.330884  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0805 12:59:39.331381  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.331878  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.331902  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.332289  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.332529  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.336244  450393 addons.go:234] Setting addon default-storageclass=true in "embed-certs-321139"
	W0805 12:59:39.336269  450393 addons.go:243] addon default-storageclass should already be in state true
	I0805 12:59:39.336305  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.336688  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.336735  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.347255  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0805 12:59:39.347411  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0805 12:59:39.347776  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.347910  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.348271  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348291  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348464  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348476  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348603  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348760  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.348817  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348955  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.350697  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.350906  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.352896  450393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:39.352895  450393 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 12:59:39.354185  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 12:59:39.354207  450393 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 12:59:39.354224  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.354266  450393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.354277  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:59:39.354292  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.356641  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0805 12:59:39.357213  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.357546  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.357791  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.357814  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.357867  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.358001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.358020  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359294  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359322  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.359337  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359345  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.359353  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359488  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359669  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.359783  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.359977  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.360009  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.360077  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.360210  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.380935  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0805 12:59:39.381394  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.381987  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.382029  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.382362  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.382603  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.384225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.384497  450393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.384515  450393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:59:39.384536  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.389471  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.389972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.390001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.390124  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.390303  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.390604  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.390791  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.513696  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:39.533291  450393 node_ready.go:35] waiting up to 6m0s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:39.597816  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.700234  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.719936  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 12:59:39.719958  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 12:59:39.760405  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 12:59:39.760441  450393 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 12:59:39.808765  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.808794  450393 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 12:59:39.833073  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.946594  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.946633  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.946968  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.946995  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.947121  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.947137  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.947456  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.947477  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947490  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.953919  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.953942  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.954189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.954209  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636249  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636274  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636638  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:40.636715  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.636729  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636745  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636757  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636989  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.637008  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.671789  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.671819  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672207  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672217  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.672225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672468  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672485  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672499  450393 addons.go:475] Verifying addon metrics-server=true in "embed-certs-321139"
	I0805 12:59:40.674497  450393 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 12:59:36.978361  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.478380  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.478283  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.979257  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.478407  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.978772  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.478395  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.979309  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.478302  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.026001  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.026706  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:39.909336  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:41.910240  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.675778  450393 addons.go:510] duration metric: took 1.364642066s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
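The addon-enable sequence above applies the bundled manifests with the in-VM kubectl binary over SSH (storageclass, storage-provisioner, then the metrics-server pieces) before verifying the metrics-server addon. A minimal sketch of the equivalent manual invocation from inside the node, assuming the same /etc/kubernetes/addons paths and v1.30.3 binary recorded in the log:

    # run on the minikube node (e.g. via `minikube ssh`); paths as logged above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.3/kubectl apply \
      -f /etc/kubernetes/addons/storageclass.yaml \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml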
	I0805 12:59:41.537321  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:44.037571  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:41.978791  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.478841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.478344  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.978613  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.478756  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.478363  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.478417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.524568  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:45.024950  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:47.025453  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:44.408846  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.410085  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.537183  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:47.037178  450393 node_ready.go:49] node "embed-certs-321139" has status "Ready":"True"
	I0805 12:59:47.037206  450393 node_ready.go:38] duration metric: took 7.503884334s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:47.037221  450393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:47.043159  450393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048037  450393 pod_ready.go:92] pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:47.048088  450393 pod_ready.go:81] duration metric: took 4.901694ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048102  450393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055429  450393 pod_ready.go:92] pod "etcd-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.055454  450393 pod_ready.go:81] duration metric: took 2.007345086s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055464  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060072  450393 pod_ready.go:92] pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.060095  450393 pod_ready.go:81] duration metric: took 4.624968ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060103  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065663  450393 pod_ready.go:92] pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.065689  450393 pod_ready.go:81] duration metric: took 5.578205ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065708  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071143  450393 pod_ready.go:92] pod "kube-proxy-shgv2" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.071166  450393 pod_ready.go:81] duration metric: took 5.450104ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071174  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:46.978356  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.478322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.978417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.478966  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.979317  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.478449  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.978364  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.479294  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.978435  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.478614  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.028075  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.524299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:48.908177  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:50.908490  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:52.909257  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:49.438002  450393 pod_ready.go:92] pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.438032  450393 pod_ready.go:81] duration metric: took 366.851004ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.438042  450393 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:51.443490  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:53.444534  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
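These pod_ready polls keep reporting Ready=False for the metrics-server pods in the three profiles being exercised here (metrics-server-569cc877fc-dsrqr, metrics-server-6867b74b74-p7b2r, metrics-server-569cc877fc-k8mrt). A hedged sketch of checking the same Ready condition by hand, assuming the profile's kubeconfig context carries the profile name (e.g. embed-certs-321139) and using the pod name taken from the log:

    # prints "True" once the pod's Ready condition is satisfied
    kubectl --context embed-certs-321139 -n kube-system get pod metrics-server-569cc877fc-k8mrt \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'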
	I0805 12:59:51.978526  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.479187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.979090  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.478733  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.978571  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.478525  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.979125  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.478711  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.979266  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.478956  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.525369  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.526660  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:54.909757  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.409489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.445189  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.944983  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:56.979226  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.978634  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.478338  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.978987  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.479290  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.978383  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.478373  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.978412  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.479312  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.527240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.024177  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.024749  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:59.908362  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.909101  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.445471  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.944535  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.479119  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.978313  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.478401  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.979029  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.478963  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.978393  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.478418  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.978381  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.479229  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.028522  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.525385  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:04.409119  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.409863  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:05.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:07.452452  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.979172  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.479251  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.979183  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.478722  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.979248  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.478527  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.978581  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.478499  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.978520  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.478843  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.025651  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.525086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:08.909528  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.408408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:13.410472  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:09.945614  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:12.443723  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.978536  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.478504  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.979179  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:12.979258  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:13.022653  451238 cri.go:89] found id: ""
	I0805 13:00:13.022680  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.022689  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:13.022696  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:13.022766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:13.059292  451238 cri.go:89] found id: ""
	I0805 13:00:13.059326  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.059336  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:13.059343  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:13.059399  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:13.098750  451238 cri.go:89] found id: ""
	I0805 13:00:13.098782  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.098793  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:13.098802  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:13.098866  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:13.133307  451238 cri.go:89] found id: ""
	I0805 13:00:13.133338  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.133346  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:13.133353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:13.133420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:13.171124  451238 cri.go:89] found id: ""
	I0805 13:00:13.171160  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.171170  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:13.171177  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:13.171237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:13.209200  451238 cri.go:89] found id: ""
	I0805 13:00:13.209235  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.209247  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:13.209254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:13.209312  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:13.244261  451238 cri.go:89] found id: ""
	I0805 13:00:13.244302  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.244313  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:13.244324  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:13.244397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:13.283295  451238 cri.go:89] found id: ""
	I0805 13:00:13.283331  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.283342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:13.283356  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:13.283372  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:13.344134  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:13.344174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:13.384084  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:13.384119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:13.433784  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:13.433821  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:13.449756  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:13.449786  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:13.573090  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
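Each retry of this diagnostics pass runs the same checks: crictl is queried per control-plane component and finds no containers, CRI-O and kubelet logs are pulled from journald, dmesg is filtered for warnings, and "describe nodes" fails because nothing answers on localhost:8443. A sketch of running the same checks by hand on the node, using only the commands already recorded above:

    # no kube-apiserver container running -> describe nodes fails with "connection refused"
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig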
	I0805 13:00:16.074053  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:16.087817  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:16.087900  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:16.130938  451238 cri.go:89] found id: ""
	I0805 13:00:16.130970  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.130981  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:16.130989  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:16.131058  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:16.184208  451238 cri.go:89] found id: ""
	I0805 13:00:16.184245  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.184259  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:16.184269  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:16.184346  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:16.230959  451238 cri.go:89] found id: ""
	I0805 13:00:16.230998  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.231011  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:16.231020  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:16.231100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:16.282886  451238 cri.go:89] found id: ""
	I0805 13:00:16.282940  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.282954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:16.282963  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:16.283024  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:16.320345  451238 cri.go:89] found id: ""
	I0805 13:00:16.320381  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.320397  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:16.320404  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:16.320521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:16.356390  451238 cri.go:89] found id: ""
	I0805 13:00:16.356427  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.356439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:16.356447  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:16.356503  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:16.400477  451238 cri.go:89] found id: ""
	I0805 13:00:16.400510  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.400529  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:16.400539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:16.400612  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:16.440634  451238 cri.go:89] found id: ""
	I0805 13:00:16.440662  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.440673  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:16.440685  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:16.440702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:16.510879  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:16.510922  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:16.554294  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:16.554332  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:16.607798  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:16.607853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:16.622618  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:16.622655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:16.702599  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:14.025025  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.025182  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:15.909245  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.409729  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:14.445222  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.445451  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.944533  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:19.202789  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:19.215776  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:19.215851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:19.250503  451238 cri.go:89] found id: ""
	I0805 13:00:19.250540  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.250551  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:19.250558  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:19.250630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:19.287358  451238 cri.go:89] found id: ""
	I0805 13:00:19.287392  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.287403  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:19.287412  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:19.287484  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:19.322167  451238 cri.go:89] found id: ""
	I0805 13:00:19.322195  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.322203  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:19.322209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:19.322262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:19.356874  451238 cri.go:89] found id: ""
	I0805 13:00:19.356905  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.356923  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:19.356931  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:19.357006  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:19.395172  451238 cri.go:89] found id: ""
	I0805 13:00:19.395206  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.395217  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:19.395227  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:19.395294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:19.438404  451238 cri.go:89] found id: ""
	I0805 13:00:19.438431  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.438439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:19.438445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:19.438510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:19.474727  451238 cri.go:89] found id: ""
	I0805 13:00:19.474755  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.474762  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:19.474769  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:19.474832  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:19.513906  451238 cri.go:89] found id: ""
	I0805 13:00:19.513945  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.513953  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:19.513963  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:19.513977  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:19.528337  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:19.528378  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:19.601135  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.601168  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:19.601185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:19.676792  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:19.676844  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:19.716861  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:19.716894  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:18.025634  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.027525  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.909150  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.910153  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.945009  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:23.444529  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.266971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:22.280346  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:22.280422  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:22.314788  451238 cri.go:89] found id: ""
	I0805 13:00:22.314816  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.314824  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:22.314831  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:22.314884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:22.357357  451238 cri.go:89] found id: ""
	I0805 13:00:22.357394  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.357405  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:22.357414  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:22.357483  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:22.393254  451238 cri.go:89] found id: ""
	I0805 13:00:22.393288  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.393296  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:22.393302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:22.393366  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:22.434766  451238 cri.go:89] found id: ""
	I0805 13:00:22.434796  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.434807  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:22.434815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:22.434887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:22.475649  451238 cri.go:89] found id: ""
	I0805 13:00:22.475676  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.475684  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:22.475690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:22.475754  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:22.515633  451238 cri.go:89] found id: ""
	I0805 13:00:22.515662  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.515670  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:22.515677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:22.515757  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:22.550716  451238 cri.go:89] found id: ""
	I0805 13:00:22.550749  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.550759  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:22.550767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:22.550849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:22.588537  451238 cri.go:89] found id: ""
	I0805 13:00:22.588571  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.588583  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:22.588595  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:22.588609  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.638535  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:22.638577  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:22.654879  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:22.654919  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:22.721482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:22.721513  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:22.721529  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:22.801442  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:22.801489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.343805  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:25.358068  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:25.358176  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:25.393734  451238 cri.go:89] found id: ""
	I0805 13:00:25.393767  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.393778  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:25.393785  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:25.393849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:25.428217  451238 cri.go:89] found id: ""
	I0805 13:00:25.428244  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.428252  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:25.428257  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:25.428316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:25.462826  451238 cri.go:89] found id: ""
	I0805 13:00:25.462858  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.462869  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:25.462877  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:25.462961  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:25.502960  451238 cri.go:89] found id: ""
	I0805 13:00:25.502989  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.502998  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:25.503006  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:25.503072  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:25.538859  451238 cri.go:89] found id: ""
	I0805 13:00:25.538888  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.538897  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:25.538902  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:25.538964  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:25.577850  451238 cri.go:89] found id: ""
	I0805 13:00:25.577883  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.577894  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:25.577901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:25.577988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:25.611728  451238 cri.go:89] found id: ""
	I0805 13:00:25.611773  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.611785  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:25.611793  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:25.611865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:25.654987  451238 cri.go:89] found id: ""
	I0805 13:00:25.655018  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.655027  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:25.655039  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:25.655052  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:25.669124  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:25.669160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:25.747354  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:25.747380  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:25.747398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:25.825198  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:25.825241  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.865511  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:25.865546  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.526638  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.024414  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.025393  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.409361  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.411148  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.444607  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.447460  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:28.418263  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:28.431831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:28.431895  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:28.470249  451238 cri.go:89] found id: ""
	I0805 13:00:28.470280  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.470291  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:28.470301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:28.470373  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:28.506935  451238 cri.go:89] found id: ""
	I0805 13:00:28.506968  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.506977  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:28.506985  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:28.507053  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:28.546621  451238 cri.go:89] found id: ""
	I0805 13:00:28.546652  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.546663  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:28.546671  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:28.546749  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:28.584699  451238 cri.go:89] found id: ""
	I0805 13:00:28.584734  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.584745  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:28.584753  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:28.584820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:28.620693  451238 cri.go:89] found id: ""
	I0805 13:00:28.620726  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.620736  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:28.620744  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:28.620814  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:28.657340  451238 cri.go:89] found id: ""
	I0805 13:00:28.657370  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.657379  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:28.657385  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:28.657438  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:28.695126  451238 cri.go:89] found id: ""
	I0805 13:00:28.695156  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.695166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:28.695174  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:28.695239  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:28.729757  451238 cri.go:89] found id: ""
	I0805 13:00:28.729808  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.729821  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:28.729834  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:28.729852  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:28.769642  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:28.769675  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:28.818076  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:28.818114  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:28.831466  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:28.831496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:28.902788  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:28.902818  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:28.902836  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.482482  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:31.497767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:31.497867  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:31.536922  451238 cri.go:89] found id: ""
	I0805 13:00:31.536948  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.536960  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:31.536969  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:31.537040  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:31.572422  451238 cri.go:89] found id: ""
	I0805 13:00:31.572456  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.572466  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:31.572472  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:31.572531  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:31.607961  451238 cri.go:89] found id: ""
	I0805 13:00:31.607996  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.608008  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:31.608016  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:31.608082  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:31.641771  451238 cri.go:89] found id: ""
	I0805 13:00:31.641800  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.641822  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:31.641830  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:31.641904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:31.681661  451238 cri.go:89] found id: ""
	I0805 13:00:31.681695  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.681707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:31.681715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:31.681791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:31.723777  451238 cri.go:89] found id: ""
	I0805 13:00:31.723814  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.723823  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:31.723829  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:31.723922  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:31.759898  451238 cri.go:89] found id: ""
	I0805 13:00:31.759935  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.759948  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:31.759957  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:31.760022  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:31.798433  451238 cri.go:89] found id: ""
	I0805 13:00:31.798462  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.798470  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:31.798480  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:31.798497  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:31.872005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:31.872030  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:31.872045  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.952201  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:31.952240  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:29.524445  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.525646  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.909901  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:32.408826  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.944170  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.944427  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.995920  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:31.995955  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:32.047453  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:32.047493  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.562369  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:34.576644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:34.576708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:34.613002  451238 cri.go:89] found id: ""
	I0805 13:00:34.613036  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.613047  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:34.613056  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:34.613127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:34.650723  451238 cri.go:89] found id: ""
	I0805 13:00:34.650757  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.650769  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:34.650777  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:34.650851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:34.689047  451238 cri.go:89] found id: ""
	I0805 13:00:34.689073  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.689081  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:34.689088  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:34.689148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:34.727552  451238 cri.go:89] found id: ""
	I0805 13:00:34.727592  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.727604  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:34.727612  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:34.727683  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:34.761661  451238 cri.go:89] found id: ""
	I0805 13:00:34.761696  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.761707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:34.761715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:34.761791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:34.800062  451238 cri.go:89] found id: ""
	I0805 13:00:34.800116  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.800128  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:34.800137  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:34.800198  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:34.833536  451238 cri.go:89] found id: ""
	I0805 13:00:34.833566  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.833578  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:34.833586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:34.833654  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:34.868079  451238 cri.go:89] found id: ""
	I0805 13:00:34.868117  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.868126  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:34.868135  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:34.868149  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:34.920092  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:34.920124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.934484  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:34.934510  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:35.007716  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:35.007751  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:35.007768  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:35.088183  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:35.088233  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:34.024704  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.025754  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.409917  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.409993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.444842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.943985  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:37.633443  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:37.647405  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:37.647470  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:37.684682  451238 cri.go:89] found id: ""
	I0805 13:00:37.684711  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.684720  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:37.684727  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:37.684779  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:37.723413  451238 cri.go:89] found id: ""
	I0805 13:00:37.723442  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.723449  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:37.723455  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:37.723506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:37.758388  451238 cri.go:89] found id: ""
	I0805 13:00:37.758418  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.758428  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:37.758437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:37.758501  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:37.797846  451238 cri.go:89] found id: ""
	I0805 13:00:37.797879  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.797890  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:37.797901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:37.797971  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:37.837053  451238 cri.go:89] found id: ""
	I0805 13:00:37.837082  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.837092  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:37.837104  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:37.837163  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:37.876185  451238 cri.go:89] found id: ""
	I0805 13:00:37.876211  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.876220  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:37.876226  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:37.876294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:37.915318  451238 cri.go:89] found id: ""
	I0805 13:00:37.915350  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.915362  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:37.915370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:37.915429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:37.953916  451238 cri.go:89] found id: ""
	I0805 13:00:37.953944  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.953954  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:37.953964  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:37.953976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.991116  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:37.991154  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:38.043796  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:38.043838  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:38.058636  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:38.058669  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:38.143022  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:38.143051  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:38.143067  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:40.721468  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:40.735679  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:40.735774  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:40.773583  451238 cri.go:89] found id: ""
	I0805 13:00:40.773609  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.773617  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:40.773626  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:40.773685  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:40.819857  451238 cri.go:89] found id: ""
	I0805 13:00:40.819886  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.819895  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:40.819901  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:40.819963  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:40.857156  451238 cri.go:89] found id: ""
	I0805 13:00:40.857184  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.857192  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:40.857198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:40.857251  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:40.892933  451238 cri.go:89] found id: ""
	I0805 13:00:40.892970  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.892981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:40.892990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:40.893046  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:40.927128  451238 cri.go:89] found id: ""
	I0805 13:00:40.927163  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.927173  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:40.927182  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:40.927237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:40.961790  451238 cri.go:89] found id: ""
	I0805 13:00:40.961817  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.961826  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:40.961832  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:40.961886  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:40.996249  451238 cri.go:89] found id: ""
	I0805 13:00:40.996282  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.996293  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:40.996300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:40.996371  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:41.032305  451238 cri.go:89] found id: ""
	I0805 13:00:41.032332  451238 logs.go:276] 0 containers: []
	W0805 13:00:41.032342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:41.032358  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:41.032375  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:41.075993  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:41.076027  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:41.126020  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:41.126057  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:41.140263  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:41.140288  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:41.216648  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:41.216670  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:41.216683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:38.524812  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.024597  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.909518  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:40.910256  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.410062  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.443930  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.945026  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.796367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:43.810086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:43.810162  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.844373  451238 cri.go:89] found id: ""
	I0805 13:00:43.844410  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.844422  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:43.844430  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:43.844502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:43.880249  451238 cri.go:89] found id: ""
	I0805 13:00:43.880285  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.880295  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:43.880303  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:43.880376  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:43.921279  451238 cri.go:89] found id: ""
	I0805 13:00:43.921313  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.921323  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:43.921329  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:43.921382  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:43.963736  451238 cri.go:89] found id: ""
	I0805 13:00:43.963782  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.963794  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:43.963803  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:43.963869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:44.009001  451238 cri.go:89] found id: ""
	I0805 13:00:44.009038  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.009050  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:44.009057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:44.009128  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:44.059484  451238 cri.go:89] found id: ""
	I0805 13:00:44.059514  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.059526  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:44.059534  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:44.059605  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:44.102043  451238 cri.go:89] found id: ""
	I0805 13:00:44.102075  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.102088  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:44.102094  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:44.102170  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:44.137518  451238 cri.go:89] found id: ""
	I0805 13:00:44.137558  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.137569  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:44.137584  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:44.137600  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:44.188139  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:44.188175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:44.202544  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:44.202588  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:44.278486  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:44.278508  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:44.278521  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:44.363419  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:44.363458  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:46.905665  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:46.922141  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:46.922206  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.025461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.523997  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.908437  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.409410  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.445919  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.944243  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.963468  451238 cri.go:89] found id: ""
	I0805 13:00:46.963494  451238 logs.go:276] 0 containers: []
	W0805 13:00:46.963502  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:46.963508  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:46.963557  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:47.003445  451238 cri.go:89] found id: ""
	I0805 13:00:47.003472  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.003480  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:47.003486  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:47.003537  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:47.043271  451238 cri.go:89] found id: ""
	I0805 13:00:47.043306  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.043318  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:47.043326  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:47.043394  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:47.079843  451238 cri.go:89] found id: ""
	I0805 13:00:47.079874  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.079884  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:47.079893  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:47.079954  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:47.116819  451238 cri.go:89] found id: ""
	I0805 13:00:47.116847  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.116856  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:47.116861  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:47.116917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:47.156302  451238 cri.go:89] found id: ""
	I0805 13:00:47.156331  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.156340  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:47.156353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:47.156410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:47.200419  451238 cri.go:89] found id: ""
	I0805 13:00:47.200449  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.200463  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:47.200469  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:47.200533  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:47.237483  451238 cri.go:89] found id: ""
	I0805 13:00:47.237515  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.237522  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:47.237532  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:47.237545  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:47.251598  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:47.251632  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:47.326457  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:47.326483  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:47.326501  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.410413  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:47.410455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:47.452696  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:47.452732  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.005335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:50.019610  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:50.019679  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:50.057401  451238 cri.go:89] found id: ""
	I0805 13:00:50.057435  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.057447  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:50.057456  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:50.057516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:50.101710  451238 cri.go:89] found id: ""
	I0805 13:00:50.101743  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.101751  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:50.101758  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:50.101822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:50.139624  451238 cri.go:89] found id: ""
	I0805 13:00:50.139658  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.139669  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:50.139677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:50.139761  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:50.176004  451238 cri.go:89] found id: ""
	I0805 13:00:50.176031  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.176039  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:50.176045  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:50.176123  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:50.219319  451238 cri.go:89] found id: ""
	I0805 13:00:50.219352  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.219362  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:50.219369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:50.219437  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:50.287443  451238 cri.go:89] found id: ""
	I0805 13:00:50.287478  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.287489  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:50.287498  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:50.287582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:50.321018  451238 cri.go:89] found id: ""
	I0805 13:00:50.321047  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.321056  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:50.321063  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:50.321124  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:50.354559  451238 cri.go:89] found id: ""
	I0805 13:00:50.354597  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.354610  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:50.354625  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:50.354642  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:50.398621  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:50.398657  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.451693  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:50.451735  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:50.466810  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:50.466851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:50.542431  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:50.542461  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:50.542482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.525977  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.025280  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.025760  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.410198  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.908466  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.946086  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.445962  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.128466  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:53.144139  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:53.144216  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:53.178383  451238 cri.go:89] found id: ""
	I0805 13:00:53.178427  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.178438  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:53.178447  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:53.178516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:53.220312  451238 cri.go:89] found id: ""
	I0805 13:00:53.220348  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.220358  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:53.220365  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:53.220432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:53.255352  451238 cri.go:89] found id: ""
	I0805 13:00:53.255380  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.255390  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:53.255398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:53.255473  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:53.293254  451238 cri.go:89] found id: ""
	I0805 13:00:53.293292  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.293311  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:53.293320  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:53.293395  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:53.329407  451238 cri.go:89] found id: ""
	I0805 13:00:53.329436  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.329448  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:53.329455  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:53.329523  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:53.362838  451238 cri.go:89] found id: ""
	I0805 13:00:53.362868  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.362876  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:53.362883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:53.362957  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:53.399283  451238 cri.go:89] found id: ""
	I0805 13:00:53.399313  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.399324  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:53.399332  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:53.399405  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:53.438527  451238 cri.go:89] found id: ""
	I0805 13:00:53.438558  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.438567  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:53.438578  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:53.438597  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:53.492709  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:53.492760  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:53.507522  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:53.507555  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:53.581690  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:53.581710  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:53.581724  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.664402  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:53.664451  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.209640  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:56.224403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:56.224487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:56.266214  451238 cri.go:89] found id: ""
	I0805 13:00:56.266243  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.266254  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:56.266263  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:56.266328  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:56.304034  451238 cri.go:89] found id: ""
	I0805 13:00:56.304070  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.304082  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:56.304091  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:56.304172  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:56.342133  451238 cri.go:89] found id: ""
	I0805 13:00:56.342159  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.342167  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:56.342173  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:56.342225  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:56.378549  451238 cri.go:89] found id: ""
	I0805 13:00:56.378588  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.378599  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:56.378606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:56.378667  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:56.415613  451238 cri.go:89] found id: ""
	I0805 13:00:56.415641  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.415651  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:56.415657  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:56.415715  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:56.451915  451238 cri.go:89] found id: ""
	I0805 13:00:56.451944  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.451953  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:56.451960  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:56.452021  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:56.492219  451238 cri.go:89] found id: ""
	I0805 13:00:56.492255  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.492267  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:56.492275  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:56.492347  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:56.534564  451238 cri.go:89] found id: ""
	I0805 13:00:56.534606  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.534618  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:56.534632  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:56.534652  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:56.548772  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:56.548813  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:56.625649  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:56.625678  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:56.625695  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:56.716735  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:56.716787  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.771881  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:56.771910  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:54.525355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.025659  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:54.908805  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:56.909601  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:55.943885  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.945233  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.325624  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:59.338796  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:59.338869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:59.375002  451238 cri.go:89] found id: ""
	I0805 13:00:59.375039  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.375050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:59.375059  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:59.375138  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:59.410778  451238 cri.go:89] found id: ""
	I0805 13:00:59.410800  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.410810  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:59.410817  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:59.410873  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:59.453728  451238 cri.go:89] found id: ""
	I0805 13:00:59.453760  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.453771  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:59.453779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:59.453845  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:59.492968  451238 cri.go:89] found id: ""
	I0805 13:00:59.493002  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.493013  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:59.493021  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:59.493091  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:59.533342  451238 cri.go:89] found id: ""
	I0805 13:00:59.533372  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.533383  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:59.533390  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:59.533445  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:59.569677  451238 cri.go:89] found id: ""
	I0805 13:00:59.569705  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.569715  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:59.569722  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:59.569789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:59.605106  451238 cri.go:89] found id: ""
	I0805 13:00:59.605139  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.605150  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:59.605158  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:59.605228  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:59.639948  451238 cri.go:89] found id: ""
	I0805 13:00:59.639980  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.639989  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:59.640000  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:59.640016  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:59.679926  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:59.679956  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.731545  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:59.731591  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:59.746286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:59.746320  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:59.828398  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:59.828420  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:59.828439  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:59.524365  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.525092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.410713  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.909619  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.945483  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.445780  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.412560  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:02.429633  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:02.429718  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:02.475916  451238 cri.go:89] found id: ""
	I0805 13:01:02.475951  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.475963  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:02.475971  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:02.476061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:02.528807  451238 cri.go:89] found id: ""
	I0805 13:01:02.528837  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.528849  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:02.528856  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:02.528924  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:02.575164  451238 cri.go:89] found id: ""
	I0805 13:01:02.575194  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.575210  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:02.575218  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:02.575286  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:02.614709  451238 cri.go:89] found id: ""
	I0805 13:01:02.614800  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.614815  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:02.614824  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:02.614902  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:02.654941  451238 cri.go:89] found id: ""
	I0805 13:01:02.654979  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.654990  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:02.654997  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:02.655069  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:02.690552  451238 cri.go:89] found id: ""
	I0805 13:01:02.690586  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.690595  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:02.690602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:02.690657  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:02.725607  451238 cri.go:89] found id: ""
	I0805 13:01:02.725644  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.725656  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:02.725665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:02.725745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:02.760180  451238 cri.go:89] found id: ""
	I0805 13:01:02.760211  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.760223  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:02.760244  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:02.760262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:02.813071  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:02.813128  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:02.828633  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:02.828665  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:02.898049  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:02.898074  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:02.898087  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.988077  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:02.988124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
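The cycle above is minikube's diagnostic loop when no control-plane containers can be found: it filters "crictl ps -a" by each expected container name, then gathers kubelet, dmesg, node-describe, CRI-O and container-status output before retrying. A minimal sketch of the same checks run by hand on the node, using exactly the commands logged above (SSH access to the guest VM is assumed):

    # each of these returning no IDs is what produces the "No container was found matching ..." warnings
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=coredns
    # the log sources minikube collects on every retry
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig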
	I0805 13:01:05.532719  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:05.546423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:05.546489  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:05.590978  451238 cri.go:89] found id: ""
	I0805 13:01:05.591006  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.591013  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:05.591019  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:05.591071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:05.631251  451238 cri.go:89] found id: ""
	I0805 13:01:05.631287  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.631298  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:05.631306  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:05.631391  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:05.671826  451238 cri.go:89] found id: ""
	I0805 13:01:05.671863  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.671875  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:05.671883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:05.671951  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:05.708147  451238 cri.go:89] found id: ""
	I0805 13:01:05.708176  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.708186  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:05.708194  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:05.708262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:05.741962  451238 cri.go:89] found id: ""
	I0805 13:01:05.741994  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.742006  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:05.742015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:05.742087  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:05.777930  451238 cri.go:89] found id: ""
	I0805 13:01:05.777965  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.777976  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:05.777985  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:05.778061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:05.813066  451238 cri.go:89] found id: ""
	I0805 13:01:05.813099  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.813111  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:05.813119  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:05.813189  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:05.849382  451238 cri.go:89] found id: ""
	I0805 13:01:05.849410  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.849418  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:05.849428  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:05.849440  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:05.903376  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:05.903423  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:05.918540  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:05.918575  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:05.990608  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:05.990637  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:05.990658  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:06.072524  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:06.072571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:04.025528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.525325  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.409190  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.409231  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:07.445278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
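The interleaved pod_ready lines come from the other StartStop clusters running in parallel (process IDs 450884, 450576 and 450393), each polling its metrics-server pod and never seeing it become Ready. A rough manual equivalent of that readiness check, using a pod name taken from the lines above (the --context value is a placeholder for the test profile, which is not shown in these lines):

    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-dsrqr \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready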
	I0805 13:01:08.617528  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:08.631637  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:08.631713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:08.669999  451238 cri.go:89] found id: ""
	I0805 13:01:08.670039  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.670050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:08.670065  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:08.670147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:08.705322  451238 cri.go:89] found id: ""
	I0805 13:01:08.705356  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.705365  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:08.705370  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:08.705442  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:08.744884  451238 cri.go:89] found id: ""
	I0805 13:01:08.744915  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.744927  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:08.744936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:08.745018  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:08.782394  451238 cri.go:89] found id: ""
	I0805 13:01:08.782428  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.782440  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:08.782448  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:08.782518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:08.816989  451238 cri.go:89] found id: ""
	I0805 13:01:08.817018  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.817027  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:08.817034  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:08.817106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:08.856389  451238 cri.go:89] found id: ""
	I0805 13:01:08.856420  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.856431  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:08.856439  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:08.856506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:08.891942  451238 cri.go:89] found id: ""
	I0805 13:01:08.891975  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.891986  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:08.891995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:08.892064  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:08.930329  451238 cri.go:89] found id: ""
	I0805 13:01:08.930364  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.930375  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:08.930389  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:08.930406  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.972574  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:08.972610  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:09.026194  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:09.026228  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:09.040973  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:09.041002  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:09.115094  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:09.115121  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:09.115143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
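Every describe-nodes attempt in this log fails the same way: the kubeconfig points kubectl at localhost:8443, and the connection is refused because the kube-apiserver container was never created (all the crictl listings above come back empty). Two quick ways to confirm that from the node; the availability of ss and curl on the guest image is an assumption:

    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -k https://localhost:8443/healthz || echo "apiserver not reachable"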
	I0805 13:01:11.698322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:11.711841  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:11.711927  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:11.749152  451238 cri.go:89] found id: ""
	I0805 13:01:11.749187  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.749199  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:11.749207  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:11.749274  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:11.785395  451238 cri.go:89] found id: ""
	I0805 13:01:11.785430  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.785441  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:11.785449  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:11.785516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:11.822240  451238 cri.go:89] found id: ""
	I0805 13:01:11.822282  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.822293  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:11.822302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:11.822372  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:11.858755  451238 cri.go:89] found id: ""
	I0805 13:01:11.858794  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.858805  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:11.858814  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:11.858884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:11.893064  451238 cri.go:89] found id: ""
	I0805 13:01:11.893101  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.893113  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:11.893121  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:11.893195  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:11.930965  451238 cri.go:89] found id: ""
	I0805 13:01:11.931003  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.931015  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:11.931025  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:11.931089  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:09.025566  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.525069  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.910618  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.409157  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:09.944797  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:12.445029  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.967594  451238 cri.go:89] found id: ""
	I0805 13:01:11.967620  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.967630  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:11.967638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:11.967697  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:12.004978  451238 cri.go:89] found id: ""
	I0805 13:01:12.005007  451238 logs.go:276] 0 containers: []
	W0805 13:01:12.005015  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:12.005025  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:12.005037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:12.087476  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:12.087500  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:12.087515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:12.177690  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:12.177757  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:12.222858  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:12.222889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:12.273322  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:12.273362  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:14.788210  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:14.802351  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:14.802426  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:14.837705  451238 cri.go:89] found id: ""
	I0805 13:01:14.837736  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.837746  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:14.837755  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:14.837824  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:14.873389  451238 cri.go:89] found id: ""
	I0805 13:01:14.873420  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.873430  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:14.873438  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:14.873506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:14.913969  451238 cri.go:89] found id: ""
	I0805 13:01:14.913999  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.914009  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:14.914018  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:14.914081  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:14.953478  451238 cri.go:89] found id: ""
	I0805 13:01:14.953510  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.953521  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:14.953528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:14.953584  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:14.992166  451238 cri.go:89] found id: ""
	I0805 13:01:14.992197  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.992206  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:14.992212  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:14.992291  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:15.031258  451238 cri.go:89] found id: ""
	I0805 13:01:15.031285  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.031293  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:15.031300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:15.031353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:15.068944  451238 cri.go:89] found id: ""
	I0805 13:01:15.068972  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.068980  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:15.068986  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:15.069042  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:15.105413  451238 cri.go:89] found id: ""
	I0805 13:01:15.105443  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.105454  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:15.105467  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:15.105489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:15.161925  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:15.161969  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:15.177174  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:15.177206  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:15.257950  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:15.257975  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:15.257989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:15.336672  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:15.336716  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
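Each retry begins with the same pgrep probe before any API traffic is attempted; its repeated failure (no output, exit status 1) is why the loop keeps falling back to log gathering. Run by hand it looks like this (quoting added; -f matches against the full command line, -x requires the whole line to match the pattern, -n picks the newest match):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # no output here means no apiserver process has been started for this profile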
	I0805 13:01:13.526088  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:16.025513  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:13.908773  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:15.908817  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.910431  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:14.945842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.444869  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.876314  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:17.889842  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:17.889909  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:17.928050  451238 cri.go:89] found id: ""
	I0805 13:01:17.928077  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.928086  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:17.928092  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:17.928150  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:17.965713  451238 cri.go:89] found id: ""
	I0805 13:01:17.965751  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.965762  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:17.965770  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:17.965837  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:18.002938  451238 cri.go:89] found id: ""
	I0805 13:01:18.002972  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.002984  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:18.002992  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:18.003062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:18.040140  451238 cri.go:89] found id: ""
	I0805 13:01:18.040178  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.040190  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:18.040198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:18.040269  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:18.075427  451238 cri.go:89] found id: ""
	I0805 13:01:18.075463  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.075475  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:18.075490  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:18.075558  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:18.113469  451238 cri.go:89] found id: ""
	I0805 13:01:18.113507  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.113521  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:18.113528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:18.113587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:18.152626  451238 cri.go:89] found id: ""
	I0805 13:01:18.152662  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.152672  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:18.152678  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:18.152745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:18.189540  451238 cri.go:89] found id: ""
	I0805 13:01:18.189577  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.189590  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:18.189602  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:18.189618  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:18.244314  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:18.244353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:18.257912  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:18.257939  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:18.339659  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:18.339682  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:18.339699  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:18.425391  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:18.425449  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:20.975889  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:20.989798  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:20.989868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:21.030858  451238 cri.go:89] found id: ""
	I0805 13:01:21.030894  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.030906  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:21.030915  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:21.030979  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:21.067367  451238 cri.go:89] found id: ""
	I0805 13:01:21.067402  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.067411  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:21.067419  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:21.067476  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:21.104307  451238 cri.go:89] found id: ""
	I0805 13:01:21.104337  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.104352  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:21.104361  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:21.104424  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:21.141486  451238 cri.go:89] found id: ""
	I0805 13:01:21.141519  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.141531  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:21.141539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:21.141606  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:21.179247  451238 cri.go:89] found id: ""
	I0805 13:01:21.179305  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.179317  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:21.179330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:21.179406  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:21.215030  451238 cri.go:89] found id: ""
	I0805 13:01:21.215065  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.215075  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:21.215083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:21.215152  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:21.252982  451238 cri.go:89] found id: ""
	I0805 13:01:21.253008  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.253016  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:21.253022  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:21.253097  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:21.290256  451238 cri.go:89] found id: ""
	I0805 13:01:21.290292  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.290302  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:21.290325  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:21.290343  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:21.342809  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:21.342855  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:21.357959  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:21.358000  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:21.433087  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:21.433120  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:21.433143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:21.514261  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:21.514312  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:18.025965  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.524832  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.409943  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:22.909233  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:19.445074  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:21.445547  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:23.445637  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.060402  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:24.076056  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:24.076131  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:24.115976  451238 cri.go:89] found id: ""
	I0805 13:01:24.116009  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.116022  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:24.116031  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:24.116111  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:24.158411  451238 cri.go:89] found id: ""
	I0805 13:01:24.158440  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.158448  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:24.158454  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:24.158520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:24.194589  451238 cri.go:89] found id: ""
	I0805 13:01:24.194624  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.194635  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:24.194644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:24.194720  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:24.231528  451238 cri.go:89] found id: ""
	I0805 13:01:24.231562  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.231569  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:24.231576  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:24.231649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:24.268491  451238 cri.go:89] found id: ""
	I0805 13:01:24.268523  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.268532  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:24.268538  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:24.268602  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:24.306718  451238 cri.go:89] found id: ""
	I0805 13:01:24.306752  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.306763  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:24.306772  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:24.306839  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:24.343552  451238 cri.go:89] found id: ""
	I0805 13:01:24.343578  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.343586  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:24.343593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:24.343649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:24.384555  451238 cri.go:89] found id: ""
	I0805 13:01:24.384590  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.384602  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:24.384615  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:24.384633  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.430256  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:24.430298  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:24.484616  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:24.484661  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:24.500926  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:24.500958  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:24.581379  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:24.581410  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:24.581424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:22.525806  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.526411  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.024452  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.408887  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.409717  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.945113  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:28.444740  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.167538  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:27.181959  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:27.182035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:27.223243  451238 cri.go:89] found id: ""
	I0805 13:01:27.223282  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.223293  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:27.223301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:27.223374  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:27.257806  451238 cri.go:89] found id: ""
	I0805 13:01:27.257843  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.257856  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:27.257864  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:27.257940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:27.304306  451238 cri.go:89] found id: ""
	I0805 13:01:27.304342  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.304353  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:27.304370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:27.304439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:27.342595  451238 cri.go:89] found id: ""
	I0805 13:01:27.342623  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.342631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:27.342638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:27.342707  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:27.385628  451238 cri.go:89] found id: ""
	I0805 13:01:27.385661  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.385670  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:27.385677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:27.385760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:27.425059  451238 cri.go:89] found id: ""
	I0805 13:01:27.425091  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.425100  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:27.425106  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:27.425175  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:27.465739  451238 cri.go:89] found id: ""
	I0805 13:01:27.465783  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.465794  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:27.465807  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:27.465869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:27.506431  451238 cri.go:89] found id: ""
	I0805 13:01:27.506460  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.506468  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:27.506477  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:27.506494  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:27.586440  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:27.586467  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:27.586482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.667826  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:27.667869  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:27.710458  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:27.710496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:27.763057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:27.763100  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.278799  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:30.293788  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:30.293874  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:30.336209  451238 cri.go:89] found id: ""
	I0805 13:01:30.336240  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.336248  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:30.336255  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:30.336323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:30.371593  451238 cri.go:89] found id: ""
	I0805 13:01:30.371627  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.371642  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:30.371649  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:30.371714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:30.408266  451238 cri.go:89] found id: ""
	I0805 13:01:30.408298  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.408317  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:30.408325  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:30.408388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:30.448841  451238 cri.go:89] found id: ""
	I0805 13:01:30.448864  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.448872  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:30.448878  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:30.448940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:30.488367  451238 cri.go:89] found id: ""
	I0805 13:01:30.488403  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.488411  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:30.488418  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:30.488485  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:30.527131  451238 cri.go:89] found id: ""
	I0805 13:01:30.527163  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.527173  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:30.527181  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:30.527249  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:30.568089  451238 cri.go:89] found id: ""
	I0805 13:01:30.568122  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.568131  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:30.568138  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:30.568203  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:30.605952  451238 cri.go:89] found id: ""
	I0805 13:01:30.605990  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.606007  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:30.606021  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:30.606041  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:30.656449  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:30.656491  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:30.710124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:30.710164  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.724417  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:30.724455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:30.820639  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:30.820669  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:30.820687  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:29.025377  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:31.525340  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:29.909043  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.410359  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:30.445047  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.445931  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:33.403497  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:33.419581  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:33.419651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:33.462011  451238 cri.go:89] found id: ""
	I0805 13:01:33.462042  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.462051  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:33.462057  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:33.462126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:33.502476  451238 cri.go:89] found id: ""
	I0805 13:01:33.502509  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.502519  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:33.502527  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:33.502601  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:33.547392  451238 cri.go:89] found id: ""
	I0805 13:01:33.547421  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.547430  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:33.547437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:33.547490  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:33.584013  451238 cri.go:89] found id: ""
	I0805 13:01:33.584040  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.584048  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:33.584054  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:33.584125  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:33.617325  451238 cri.go:89] found id: ""
	I0805 13:01:33.617359  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.617367  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:33.617374  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:33.617429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:33.651922  451238 cri.go:89] found id: ""
	I0805 13:01:33.651959  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.651971  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:33.651980  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:33.652049  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:33.689487  451238 cri.go:89] found id: ""
	I0805 13:01:33.689515  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.689522  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:33.689529  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:33.689580  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:33.723220  451238 cri.go:89] found id: ""
	I0805 13:01:33.723251  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.723260  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:33.723270  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:33.723282  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:33.777271  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:33.777311  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:33.792497  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:33.792532  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:33.866801  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:33.866826  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:33.866842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.946739  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:33.946774  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
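The entries above form one complete probe-and-gather cycle: the test driver looks for a running kube-apiserver process, lists CRI containers for each expected component, finds none, and falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same probe can be repeated by hand on the node; the sketch below only re-runs commands already shown in the log and assumes an interactive shell on the minikube VM (for example via minikube ssh):

    # List containers for each component the driver expects (commands taken from the log above)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching $name"
      else
        echo "$name: $ids"
      fi
    done

    # Fallback log gathering, as in the "Gathering logs for ..." steps above
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a || sudo docker ps -a

The cycle repeats roughly every three seconds below with the same empty results, while the other test processes (PIDs 450393, 450576, 450884) keep reporting their metrics-server pods as not Ready.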
	I0805 13:01:36.486108  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:36.501316  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:36.501397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:36.542082  451238 cri.go:89] found id: ""
	I0805 13:01:36.542118  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.542130  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:36.542139  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:36.542217  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:36.581005  451238 cri.go:89] found id: ""
	I0805 13:01:36.581047  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.581059  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:36.581068  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:36.581148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:36.623945  451238 cri.go:89] found id: ""
	I0805 13:01:36.623974  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.623982  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:36.623987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:36.624041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:36.661632  451238 cri.go:89] found id: ""
	I0805 13:01:36.661665  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.661673  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:36.661680  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:36.661738  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:36.701808  451238 cri.go:89] found id: ""
	I0805 13:01:36.701839  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.701850  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:36.701857  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:36.701941  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:36.742287  451238 cri.go:89] found id: ""
	I0805 13:01:36.742320  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.742331  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:36.742340  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:36.742410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:36.794581  451238 cri.go:89] found id: ""
	I0805 13:01:36.794610  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.794621  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:36.794629  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:36.794690  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:36.833271  451238 cri.go:89] found id: ""
	I0805 13:01:36.833301  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.833311  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:36.833325  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:36.833346  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:36.921427  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:36.921467  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:34.024353  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.025557  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.909401  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.909529  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.945077  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.945632  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.965468  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:36.965503  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:37.018475  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:37.018515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:37.033671  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:37.033697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:37.105339  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.606042  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:39.619215  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:39.619296  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:39.655614  451238 cri.go:89] found id: ""
	I0805 13:01:39.655648  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.655660  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:39.655668  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:39.655760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:39.691489  451238 cri.go:89] found id: ""
	I0805 13:01:39.691523  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.691535  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:39.691543  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:39.691610  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:39.726394  451238 cri.go:89] found id: ""
	I0805 13:01:39.726427  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.726438  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:39.726446  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:39.726518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:39.759847  451238 cri.go:89] found id: ""
	I0805 13:01:39.759897  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.759909  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:39.759918  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:39.759988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:39.795011  451238 cri.go:89] found id: ""
	I0805 13:01:39.795043  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.795051  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:39.795057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:39.795120  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:39.831302  451238 cri.go:89] found id: ""
	I0805 13:01:39.831336  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.831346  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:39.831356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:39.831432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:39.866506  451238 cri.go:89] found id: ""
	I0805 13:01:39.866540  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.866547  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:39.866554  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:39.866622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:39.898083  451238 cri.go:89] found id: ""
	I0805 13:01:39.898108  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.898115  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:39.898128  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:39.898147  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:39.912192  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:39.912221  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:39.989216  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.989246  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:39.989262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:40.069702  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:40.069746  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:40.118390  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:40.118428  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:38.525929  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:40.527120  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:38.909905  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.408953  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.409966  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:39.445474  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.944704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.944956  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:42.669421  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:42.682287  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:42.682359  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:42.722933  451238 cri.go:89] found id: ""
	I0805 13:01:42.722961  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.722969  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:42.722975  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:42.723037  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:42.757604  451238 cri.go:89] found id: ""
	I0805 13:01:42.757635  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.757646  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:42.757654  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:42.757723  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:42.795825  451238 cri.go:89] found id: ""
	I0805 13:01:42.795852  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.795863  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:42.795871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:42.795939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:42.831749  451238 cri.go:89] found id: ""
	I0805 13:01:42.831779  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.831791  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:42.831800  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:42.831862  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:42.866280  451238 cri.go:89] found id: ""
	I0805 13:01:42.866310  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.866322  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:42.866330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:42.866390  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:42.904393  451238 cri.go:89] found id: ""
	I0805 13:01:42.904427  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.904436  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:42.904445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:42.904510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:42.943175  451238 cri.go:89] found id: ""
	I0805 13:01:42.943204  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.943215  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:42.943223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:42.943292  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:42.979117  451238 cri.go:89] found id: ""
	I0805 13:01:42.979144  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.979152  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:42.979174  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:42.979191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:43.032032  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:43.032070  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:43.046285  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:43.046315  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:43.120300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:43.120327  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:43.120347  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:43.209800  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:43.209851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:45.759057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:45.771984  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:45.772056  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:45.805421  451238 cri.go:89] found id: ""
	I0805 13:01:45.805451  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.805459  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:45.805466  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:45.805521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:45.841552  451238 cri.go:89] found id: ""
	I0805 13:01:45.841579  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.841588  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:45.841597  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:45.841672  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:45.878502  451238 cri.go:89] found id: ""
	I0805 13:01:45.878529  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.878537  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:45.878546  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:45.878622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:45.921145  451238 cri.go:89] found id: ""
	I0805 13:01:45.921187  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.921198  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:45.921207  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:45.921273  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:45.958408  451238 cri.go:89] found id: ""
	I0805 13:01:45.958437  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.958445  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:45.958452  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:45.958521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:45.994632  451238 cri.go:89] found id: ""
	I0805 13:01:45.994660  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.994669  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:45.994676  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:45.994727  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:46.032930  451238 cri.go:89] found id: ""
	I0805 13:01:46.032961  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.032971  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:46.032978  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:46.033041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:46.074396  451238 cri.go:89] found id: ""
	I0805 13:01:46.074429  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.074441  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:46.074454  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:46.074475  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:46.131977  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:46.132020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:46.147924  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:46.147957  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:46.222005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:46.222038  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:46.222054  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:46.306799  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:46.306842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:43.024643  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.524936  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.410385  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:47.909281  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:46.444746  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.950198  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.856982  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:48.870945  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:48.871025  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:48.930811  451238 cri.go:89] found id: ""
	I0805 13:01:48.930837  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.930852  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:48.930858  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:48.930917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:48.986604  451238 cri.go:89] found id: ""
	I0805 13:01:48.986629  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.986637  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:48.986643  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:48.986706  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:49.039433  451238 cri.go:89] found id: ""
	I0805 13:01:49.039468  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.039479  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:49.039487  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:49.039555  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:49.079593  451238 cri.go:89] found id: ""
	I0805 13:01:49.079625  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.079637  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:49.079645  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:49.079714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:49.116243  451238 cri.go:89] found id: ""
	I0805 13:01:49.116274  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.116284  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:49.116292  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:49.116360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:49.158744  451238 cri.go:89] found id: ""
	I0805 13:01:49.158779  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.158790  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:49.158799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:49.158868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:49.193747  451238 cri.go:89] found id: ""
	I0805 13:01:49.193778  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.193786  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:49.193792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:49.193843  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:49.227663  451238 cri.go:89] found id: ""
	I0805 13:01:49.227691  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.227704  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:49.227714  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:49.227727  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:49.281380  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:49.281424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:49.296286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:49.296318  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:49.368584  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:49.368609  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:49.368625  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:49.453857  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:49.453909  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:48.024987  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.026076  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.408363  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:52.410039  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.444602  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:53.445118  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.993057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:52.006066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:52.006148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:52.043179  451238 cri.go:89] found id: ""
	I0805 13:01:52.043212  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.043223  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:52.043231  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:52.043300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:52.076469  451238 cri.go:89] found id: ""
	I0805 13:01:52.076502  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.076512  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:52.076520  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:52.076586  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:52.112443  451238 cri.go:89] found id: ""
	I0805 13:01:52.112477  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.112488  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:52.112497  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:52.112569  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:52.147589  451238 cri.go:89] found id: ""
	I0805 13:01:52.147620  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.147631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:52.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:52.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:52.184016  451238 cri.go:89] found id: ""
	I0805 13:01:52.184053  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.184063  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:52.184072  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:52.184134  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:52.219670  451238 cri.go:89] found id: ""
	I0805 13:01:52.219702  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.219714  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:52.219727  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:52.219820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:52.258697  451238 cri.go:89] found id: ""
	I0805 13:01:52.258731  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.258744  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:52.258752  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:52.258818  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:52.299599  451238 cri.go:89] found id: ""
	I0805 13:01:52.299636  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.299649  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:52.299665  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:52.299683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:52.351730  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:52.351772  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:52.365993  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:52.366022  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:52.436019  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:52.436041  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:52.436056  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:52.520082  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:52.520118  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:55.064214  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:55.077358  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:55.077454  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:55.110523  451238 cri.go:89] found id: ""
	I0805 13:01:55.110555  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.110564  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:55.110570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:55.110630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:55.147870  451238 cri.go:89] found id: ""
	I0805 13:01:55.147905  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.147916  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:55.147925  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:55.147998  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:55.180769  451238 cri.go:89] found id: ""
	I0805 13:01:55.180803  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.180814  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:55.180822  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:55.180890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:55.217290  451238 cri.go:89] found id: ""
	I0805 13:01:55.217332  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.217343  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:55.217353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:55.217420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:55.254185  451238 cri.go:89] found id: ""
	I0805 13:01:55.254221  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.254232  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:55.254239  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:55.254295  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:55.290633  451238 cri.go:89] found id: ""
	I0805 13:01:55.290662  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.290673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:55.290681  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:55.290747  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:55.325830  451238 cri.go:89] found id: ""
	I0805 13:01:55.325862  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.325873  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:55.325880  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:55.325947  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:55.359887  451238 cri.go:89] found id: ""
	I0805 13:01:55.359922  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.359931  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:55.359941  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:55.359953  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:55.418251  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:55.418299  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:55.432007  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:55.432038  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:55.507177  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:55.507205  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:55.507219  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:55.586919  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:55.586965  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:52.525480  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.525653  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.024834  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.410408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:56.909810  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:55.944741  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.946654  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:58.128822  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:58.142726  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:58.142799  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:58.178027  451238 cri.go:89] found id: ""
	I0805 13:01:58.178056  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.178067  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:58.178075  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:58.178147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:58.213309  451238 cri.go:89] found id: ""
	I0805 13:01:58.213340  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.213351  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:58.213358  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:58.213430  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:58.247296  451238 cri.go:89] found id: ""
	I0805 13:01:58.247323  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.247332  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:58.247338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:58.247393  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:58.280226  451238 cri.go:89] found id: ""
	I0805 13:01:58.280255  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.280266  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:58.280277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:58.280335  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:58.316934  451238 cri.go:89] found id: ""
	I0805 13:01:58.316969  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.316981  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:58.316989  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:58.317055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:58.360931  451238 cri.go:89] found id: ""
	I0805 13:01:58.360967  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.360979  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:58.360987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:58.361055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:58.399112  451238 cri.go:89] found id: ""
	I0805 13:01:58.399150  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.399163  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:58.399171  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:58.399244  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:58.441903  451238 cri.go:89] found id: ""
	I0805 13:01:58.441930  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.441941  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:58.441952  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:58.441967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:58.524869  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:58.524908  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.562598  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:58.562634  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:58.618274  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:58.618313  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:58.633011  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:58.633039  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:58.706287  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
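Each describe-nodes attempt in this stretch of the log fails the same way: kubectl gets "connection refused" on localhost:8443 because no kube-apiserver container is running yet (every crictl listing above comes back empty). A quick way to confirm that state from the node, assuming the ss utility is present in the node image:

    # Nothing should be listening on the apiserver port while the crictl listings are empty
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    sudo crictl ps -a --quiet --name=kube-apiserver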
	I0805 13:02:01.206971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:01.222277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:01.222357  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:01.266949  451238 cri.go:89] found id: ""
	I0805 13:02:01.266982  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.266993  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:01.267007  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:01.267108  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:01.306765  451238 cri.go:89] found id: ""
	I0805 13:02:01.306791  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.306799  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:01.306805  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:01.306859  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:01.345108  451238 cri.go:89] found id: ""
	I0805 13:02:01.345145  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.345157  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:01.345164  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:01.345227  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:01.383201  451238 cri.go:89] found id: ""
	I0805 13:02:01.383231  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.383239  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:01.383245  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:01.383307  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:01.419292  451238 cri.go:89] found id: ""
	I0805 13:02:01.419320  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.419331  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:01.419338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:01.419410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:01.456447  451238 cri.go:89] found id: ""
	I0805 13:02:01.456482  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.456492  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:01.456500  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:01.456568  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:01.496266  451238 cri.go:89] found id: ""
	I0805 13:02:01.496298  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.496306  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:01.496312  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:01.496375  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:01.541492  451238 cri.go:89] found id: ""
	I0805 13:02:01.541529  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.541541  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:01.541555  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:01.541571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:01.593140  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:01.593185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:01.606641  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:01.606670  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:01.681989  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.682015  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:01.682030  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:01.765612  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:01.765655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:59.025355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.025443  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:59.408591  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.409368  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:00.445254  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:02.944495  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:04.311066  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:04.326530  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:04.326599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:04.360091  451238 cri.go:89] found id: ""
	I0805 13:02:04.360124  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.360136  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:04.360142  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:04.360214  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:04.398983  451238 cri.go:89] found id: ""
	I0805 13:02:04.399014  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.399026  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:04.399045  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:04.399122  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:04.433444  451238 cri.go:89] found id: ""
	I0805 13:02:04.433474  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.433483  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:04.433495  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:04.433546  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:04.470113  451238 cri.go:89] found id: ""
	I0805 13:02:04.470145  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.470156  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:04.470167  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:04.470233  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:04.505695  451238 cri.go:89] found id: ""
	I0805 13:02:04.505721  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.505731  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:04.505738  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:04.505801  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:04.544093  451238 cri.go:89] found id: ""
	I0805 13:02:04.544121  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.544129  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:04.544136  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:04.544196  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:04.579663  451238 cri.go:89] found id: ""
	I0805 13:02:04.579702  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.579715  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:04.579724  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:04.579803  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:04.616524  451238 cri.go:89] found id: ""
	I0805 13:02:04.616565  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.616577  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:04.616590  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:04.616607  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:04.693014  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:04.693035  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:04.693048  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:04.772508  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:04.772550  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.813014  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:04.813043  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:04.864653  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:04.864702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:03.525225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:06.024868  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:03.908365  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.908993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.910958  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.444593  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.444737  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.378816  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:07.392347  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:07.392439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:07.425843  451238 cri.go:89] found id: ""
	I0805 13:02:07.425876  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.425887  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:07.425895  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:07.425958  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:07.461547  451238 cri.go:89] found id: ""
	I0805 13:02:07.461575  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.461584  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:07.461591  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:07.461651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:07.496461  451238 cri.go:89] found id: ""
	I0805 13:02:07.496500  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.496510  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:07.496521  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:07.496599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:07.531520  451238 cri.go:89] found id: ""
	I0805 13:02:07.531556  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.531566  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:07.531574  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:07.531642  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:07.571821  451238 cri.go:89] found id: ""
	I0805 13:02:07.571855  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.571866  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:07.571876  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:07.571948  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:07.611111  451238 cri.go:89] found id: ""
	I0805 13:02:07.611151  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.611159  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:07.611165  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:07.611226  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:07.651428  451238 cri.go:89] found id: ""
	I0805 13:02:07.651456  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.651464  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:07.651470  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:07.651520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:07.689828  451238 cri.go:89] found id: ""
	I0805 13:02:07.689858  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.689866  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:07.689877  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:07.689893  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:07.746381  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:07.746422  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.760953  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:07.760989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:07.834859  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:07.834883  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:07.834901  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:07.915344  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:07.915376  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:10.459232  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:10.472789  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:10.472853  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:10.508434  451238 cri.go:89] found id: ""
	I0805 13:02:10.508462  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.508470  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:10.508477  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:10.508539  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:10.543487  451238 cri.go:89] found id: ""
	I0805 13:02:10.543515  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.543524  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:10.543530  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:10.543582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:10.588274  451238 cri.go:89] found id: ""
	I0805 13:02:10.588302  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.588310  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:10.588317  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:10.588379  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:10.620810  451238 cri.go:89] found id: ""
	I0805 13:02:10.620851  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.620863  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:10.620871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:10.620945  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:10.657882  451238 cri.go:89] found id: ""
	I0805 13:02:10.657913  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.657923  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:10.657929  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:10.657993  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:10.696188  451238 cri.go:89] found id: ""
	I0805 13:02:10.696220  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.696229  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:10.696235  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:10.696294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:10.729942  451238 cri.go:89] found id: ""
	I0805 13:02:10.729977  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.729988  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:10.729996  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:10.730050  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:10.761972  451238 cri.go:89] found id: ""
	I0805 13:02:10.762000  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.762008  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:10.762018  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:10.762032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:10.816859  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:10.816890  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:10.830348  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:10.830379  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:10.902720  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:10.902753  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:10.902771  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:10.981464  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:10.981505  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:08.024948  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.525441  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.408841  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:12.409506  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:09.445359  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:11.944853  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:13.528296  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:13.541813  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:13.541887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:13.575632  451238 cri.go:89] found id: ""
	I0805 13:02:13.575669  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.575681  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:13.575689  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:13.575766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:13.612646  451238 cri.go:89] found id: ""
	I0805 13:02:13.612680  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.612691  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:13.612699  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:13.612755  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:13.650310  451238 cri.go:89] found id: ""
	I0805 13:02:13.650341  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.650361  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:13.650369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:13.650439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:13.686941  451238 cri.go:89] found id: ""
	I0805 13:02:13.686970  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.686981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:13.686990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:13.687054  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:13.722250  451238 cri.go:89] found id: ""
	I0805 13:02:13.722285  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.722297  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:13.722306  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:13.722388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:13.758337  451238 cri.go:89] found id: ""
	I0805 13:02:13.758367  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.758375  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:13.758382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:13.758443  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:13.792980  451238 cri.go:89] found id: ""
	I0805 13:02:13.793016  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.793028  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:13.793036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:13.793127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:13.831511  451238 cri.go:89] found id: ""
	I0805 13:02:13.831539  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.831547  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:13.831558  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:13.831579  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.885124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:13.885169  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:13.899112  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:13.899155  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:13.977058  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:13.977099  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:13.977115  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:14.060873  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:14.060911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:16.602595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:16.617557  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:16.617638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:16.660212  451238 cri.go:89] found id: ""
	I0805 13:02:16.660244  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.660256  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:16.660264  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:16.660323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:16.695515  451238 cri.go:89] found id: ""
	I0805 13:02:16.695553  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.695564  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:16.695572  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:16.695638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:16.732844  451238 cri.go:89] found id: ""
	I0805 13:02:16.732875  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.732884  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:16.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:16.732943  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:16.772465  451238 cri.go:89] found id: ""
	I0805 13:02:16.772497  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.772504  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:16.772517  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:16.772582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:16.809826  451238 cri.go:89] found id: ""
	I0805 13:02:16.809863  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.809875  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:16.809882  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:16.809949  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:16.849480  451238 cri.go:89] found id: ""
	I0805 13:02:16.849512  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.849523  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:16.849531  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:16.849598  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:16.884098  451238 cri.go:89] found id: ""
	I0805 13:02:16.884132  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.884144  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:16.884152  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:16.884222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:16.920497  451238 cri.go:89] found id: ""
	I0805 13:02:16.920523  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.920530  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:16.920541  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:16.920556  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.025299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:15.525474  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.908633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.909254  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.445321  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.945044  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:18.945630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.975287  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:16.975317  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:16.989524  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:16.989552  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:17.057997  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:17.058022  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:17.058037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:17.133721  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:17.133763  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:19.672385  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:19.687948  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:19.688017  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:19.724105  451238 cri.go:89] found id: ""
	I0805 13:02:19.724132  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.724140  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:19.724147  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:19.724199  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:19.758263  451238 cri.go:89] found id: ""
	I0805 13:02:19.758296  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.758306  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:19.758314  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:19.758381  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:19.792924  451238 cri.go:89] found id: ""
	I0805 13:02:19.792954  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.792961  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:19.792967  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:19.793023  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:19.826340  451238 cri.go:89] found id: ""
	I0805 13:02:19.826367  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.826375  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:19.826382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:19.826434  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:19.864289  451238 cri.go:89] found id: ""
	I0805 13:02:19.864323  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.864334  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:19.864343  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:19.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:19.899630  451238 cri.go:89] found id: ""
	I0805 13:02:19.899661  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.899673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:19.899682  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:19.899786  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:19.935798  451238 cri.go:89] found id: ""
	I0805 13:02:19.935826  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.935836  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:19.935843  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:19.935896  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:19.977984  451238 cri.go:89] found id: ""
	I0805 13:02:19.978019  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.978031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:19.978044  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:19.978062  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:20.030096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:20.030131  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:20.043878  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:20.043940  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:20.119251  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:20.119279  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:20.119297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:20.202445  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:20.202488  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:18.026282  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:20.524225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:19.408760  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.410108  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.445045  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.944150  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:22.744728  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:22.758606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:22.758675  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:22.791663  451238 cri.go:89] found id: ""
	I0805 13:02:22.791696  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.791708  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:22.791717  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:22.791821  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:22.826568  451238 cri.go:89] found id: ""
	I0805 13:02:22.826594  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.826603  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:22.826609  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:22.826671  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:22.860430  451238 cri.go:89] found id: ""
	I0805 13:02:22.860459  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.860470  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:22.860479  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:22.860543  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:22.893815  451238 cri.go:89] found id: ""
	I0805 13:02:22.893846  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.893854  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:22.893860  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:22.893929  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:22.929804  451238 cri.go:89] found id: ""
	I0805 13:02:22.929830  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.929840  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:22.929849  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:22.929915  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:22.964918  451238 cri.go:89] found id: ""
	I0805 13:02:22.964950  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.964961  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:22.964969  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:22.965035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:23.000236  451238 cri.go:89] found id: ""
	I0805 13:02:23.000271  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.000282  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:23.000290  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:23.000354  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:23.052075  451238 cri.go:89] found id: ""
	I0805 13:02:23.052108  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.052117  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:23.052128  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:23.052141  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:23.104213  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:23.104248  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:23.118811  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:23.118851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:23.188552  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:23.188578  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:23.188595  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:23.272518  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:23.272562  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:25.811116  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:25.825030  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:25.825113  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:25.864282  451238 cri.go:89] found id: ""
	I0805 13:02:25.864318  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.864331  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:25.864339  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:25.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:25.901712  451238 cri.go:89] found id: ""
	I0805 13:02:25.901746  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.901754  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:25.901760  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:25.901822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:25.937036  451238 cri.go:89] found id: ""
	I0805 13:02:25.937068  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.937077  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:25.937083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:25.937146  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:25.974598  451238 cri.go:89] found id: ""
	I0805 13:02:25.974627  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.974638  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:25.974646  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:25.974713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:26.011083  451238 cri.go:89] found id: ""
	I0805 13:02:26.011116  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.011124  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:26.011130  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:26.011190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:26.050187  451238 cri.go:89] found id: ""
	I0805 13:02:26.050219  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.050231  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:26.050242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:26.050317  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:26.085038  451238 cri.go:89] found id: ""
	I0805 13:02:26.085067  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.085077  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:26.085086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:26.085151  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:26.122121  451238 cri.go:89] found id: ""
	I0805 13:02:26.122150  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.122158  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:26.122173  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:26.122191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:26.193819  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:26.193850  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:26.193865  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:26.273453  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:26.273492  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:26.312474  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:26.312509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:26.363176  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:26.363215  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:22.524303  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:24.525047  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.528347  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.909120  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.409913  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:25.944824  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.444803  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.878523  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:28.892242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:28.892330  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:28.928650  451238 cri.go:89] found id: ""
	I0805 13:02:28.928682  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.928693  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:28.928702  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:28.928772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:28.965582  451238 cri.go:89] found id: ""
	I0805 13:02:28.965615  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.965626  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:28.965634  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:28.965698  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:29.001824  451238 cri.go:89] found id: ""
	I0805 13:02:29.001855  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.001865  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:29.001874  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:29.001939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:29.037688  451238 cri.go:89] found id: ""
	I0805 13:02:29.037715  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.037722  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:29.037730  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:29.037780  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:29.078495  451238 cri.go:89] found id: ""
	I0805 13:02:29.078540  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.078552  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:29.078559  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:29.078627  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:29.113728  451238 cri.go:89] found id: ""
	I0805 13:02:29.113764  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.113776  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:29.113786  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:29.113851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:29.147590  451238 cri.go:89] found id: ""
	I0805 13:02:29.147618  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.147629  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:29.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:29.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:29.186015  451238 cri.go:89] found id: ""
	I0805 13:02:29.186043  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.186052  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:29.186062  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:29.186074  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:29.242795  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:29.242850  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:29.257012  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:29.257046  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:29.330528  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:29.330555  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:29.330569  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:29.418109  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:29.418145  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:29.025256  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.526187  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.909283  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.409736  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:30.944380  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:32.945421  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.986351  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:32.001265  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:32.001349  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:32.035152  451238 cri.go:89] found id: ""
	I0805 13:02:32.035191  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.035200  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:32.035208  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:32.035262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:32.069086  451238 cri.go:89] found id: ""
	I0805 13:02:32.069118  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.069128  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:32.069136  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:32.069204  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:32.103788  451238 cri.go:89] found id: ""
	I0805 13:02:32.103814  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.103822  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:32.103831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:32.103893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:32.139104  451238 cri.go:89] found id: ""
	I0805 13:02:32.139138  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.139149  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:32.139157  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:32.139222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:32.192759  451238 cri.go:89] found id: ""
	I0805 13:02:32.192789  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.192798  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:32.192804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:32.192865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:32.231080  451238 cri.go:89] found id: ""
	I0805 13:02:32.231115  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.231126  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:32.231135  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:32.231200  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:32.266547  451238 cri.go:89] found id: ""
	I0805 13:02:32.266578  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.266587  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:32.266594  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:32.266647  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:32.301828  451238 cri.go:89] found id: ""
	I0805 13:02:32.301856  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.301865  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:32.301875  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:32.301888  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:32.358439  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:32.358479  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:32.372349  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:32.372383  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:32.442335  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:32.442369  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:32.442388  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:32.521705  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:32.521744  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:35.060867  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:35.074370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:35.074433  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:35.111149  451238 cri.go:89] found id: ""
	I0805 13:02:35.111181  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.111191  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:35.111200  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:35.111268  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:35.153781  451238 cri.go:89] found id: ""
	I0805 13:02:35.153814  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.153825  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:35.153832  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:35.153894  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:35.193207  451238 cri.go:89] found id: ""
	I0805 13:02:35.193239  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.193256  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:35.193291  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:35.193370  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:35.243879  451238 cri.go:89] found id: ""
	I0805 13:02:35.243915  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.243928  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:35.243936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:35.243994  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:35.297922  451238 cri.go:89] found id: ""
	I0805 13:02:35.297954  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.297966  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:35.297973  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:35.298039  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:35.333201  451238 cri.go:89] found id: ""
	I0805 13:02:35.333234  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.333245  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:35.333254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:35.333316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:35.366327  451238 cri.go:89] found id: ""
	I0805 13:02:35.366361  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.366373  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:35.366381  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:35.366449  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:35.401515  451238 cri.go:89] found id: ""
	I0805 13:02:35.401546  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.401555  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:35.401565  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:35.401578  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:35.451057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:35.451090  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:35.465054  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:35.465095  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:35.547111  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:35.547142  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:35.547160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:35.627451  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:35.627490  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
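	[editor's note] The block above is one full pass of the health-check loop that repeats throughout this log: for each control-plane component the process asks the container runtime for matching containers and finds none, then falls back to gathering kubelet, dmesg, CRI-O and container-status logs. A minimal Go sketch of that single per-component check, assuming only that crictl is installed on the node; the helper name hasCRIContainer is hypothetical and is not minikube code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hasCRIContainer runs the same query the log shows
    // ("crictl ps -a --quiet --name=<name>") and reports whether any
    // container ID came back on stdout.
    func hasCRIContainer(name string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ok, err := hasCRIContainer(c)
    		fmt.Printf("%s: running=%v err=%v\n", c, ok, err)
    	}
    }

	An empty result for every name, as seen above, is what produces the repeated "No container was found matching ..." warnings.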
	I0805 13:02:34.025104  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:36.524904  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:33.908489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.909183  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.909360  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.445317  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.446056  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:38.169022  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:38.181892  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:38.181968  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:38.217919  451238 cri.go:89] found id: ""
	I0805 13:02:38.217951  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.217961  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:38.217970  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:38.218041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:38.253967  451238 cri.go:89] found id: ""
	I0805 13:02:38.253999  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.254008  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:38.254020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:38.254073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:38.293757  451238 cri.go:89] found id: ""
	I0805 13:02:38.293789  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.293801  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:38.293809  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:38.293904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:38.329657  451238 cri.go:89] found id: ""
	I0805 13:02:38.329686  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.329697  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:38.329705  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:38.329772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:38.364602  451238 cri.go:89] found id: ""
	I0805 13:02:38.364635  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.364647  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:38.364656  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:38.364732  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:38.396352  451238 cri.go:89] found id: ""
	I0805 13:02:38.396382  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.396394  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:38.396403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:38.396471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:38.429172  451238 cri.go:89] found id: ""
	I0805 13:02:38.429203  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.429214  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:38.429223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:38.429293  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:38.464855  451238 cri.go:89] found id: ""
	I0805 13:02:38.464891  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.464903  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:38.464916  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:38.464931  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:38.514924  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:38.514967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:38.530076  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:38.530113  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:38.602472  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:38.602494  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:38.602509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:38.683905  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:38.683948  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:41.226878  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:41.245027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:41.245100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:41.280482  451238 cri.go:89] found id: ""
	I0805 13:02:41.280511  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.280523  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:41.280532  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:41.280597  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:41.316592  451238 cri.go:89] found id: ""
	I0805 13:02:41.316622  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.316633  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:41.316641  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:41.316708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:41.353282  451238 cri.go:89] found id: ""
	I0805 13:02:41.353313  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.353324  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:41.353333  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:41.353397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:41.393379  451238 cri.go:89] found id: ""
	I0805 13:02:41.393406  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.393417  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:41.393426  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:41.393502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:41.430980  451238 cri.go:89] found id: ""
	I0805 13:02:41.431012  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.431023  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:41.431031  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:41.431106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:41.467228  451238 cri.go:89] found id: ""
	I0805 13:02:41.467261  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.467273  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:41.467281  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:41.467348  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:41.502105  451238 cri.go:89] found id: ""
	I0805 13:02:41.502153  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.502166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:41.502175  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:41.502250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:41.539286  451238 cri.go:89] found id: ""
	I0805 13:02:41.539314  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.539325  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:41.539338  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:41.539353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:41.592135  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:41.592175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:41.608151  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:41.608184  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:41.680096  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:41.680131  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:41.680148  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:41.759589  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:41.759628  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:39.025448  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:41.526590  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:40.409447  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.909412  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:39.945459  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.444630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.300461  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:44.314310  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:44.314388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:44.348516  451238 cri.go:89] found id: ""
	I0805 13:02:44.348549  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.348562  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:44.348570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:44.348635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:44.388256  451238 cri.go:89] found id: ""
	I0805 13:02:44.388289  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.388299  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:44.388309  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:44.388383  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:44.426743  451238 cri.go:89] found id: ""
	I0805 13:02:44.426778  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.426786  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:44.426792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:44.426848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:44.463008  451238 cri.go:89] found id: ""
	I0805 13:02:44.463044  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.463054  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:44.463062  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:44.463129  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:44.497662  451238 cri.go:89] found id: ""
	I0805 13:02:44.497696  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.497707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:44.497715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:44.497789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:44.534253  451238 cri.go:89] found id: ""
	I0805 13:02:44.534281  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.534288  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:44.534294  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:44.534378  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:44.574350  451238 cri.go:89] found id: ""
	I0805 13:02:44.574380  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.574390  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:44.574398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:44.574468  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:44.609984  451238 cri.go:89] found id: ""
	I0805 13:02:44.610018  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.610031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:44.610044  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:44.610060  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:44.650363  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:44.650402  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:44.700997  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:44.701032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:44.716841  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:44.716874  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:44.785482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
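	[editor's note] The recurring "connection to the server localhost:8443 was refused" follows directly from the empty kube-apiserver listing in the same cycle: the node's kubeconfig points kubectl at localhost:8443, and with no apiserver container running nothing is listening there. A minimal reachability probe showing the same symptom, a sketch only (the address is taken from the error text in this log, not read from any cluster config):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the apiserver endpoint named in the error above. While the
    	// control plane is down this fails with "connection refused",
    	// which is exactly what kubectl reports in the log.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver is accepting connections on localhost:8443")
    }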
	I0805 13:02:44.785502  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:44.785517  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:44.023932  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.025733  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.909613  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.409724  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.445234  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.944157  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:48.946098  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.365382  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:47.378779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:47.378851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:47.413615  451238 cri.go:89] found id: ""
	I0805 13:02:47.413636  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.413645  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:47.413651  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:47.413699  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:47.448536  451238 cri.go:89] found id: ""
	I0805 13:02:47.448563  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.448572  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:47.448578  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:47.448629  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:47.490817  451238 cri.go:89] found id: ""
	I0805 13:02:47.490847  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.490856  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:47.490862  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:47.490931  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:47.533151  451238 cri.go:89] found id: ""
	I0805 13:02:47.533179  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.533187  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:47.533193  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:47.533250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:47.571991  451238 cri.go:89] found id: ""
	I0805 13:02:47.572022  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.572030  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:47.572036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:47.572096  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:47.606943  451238 cri.go:89] found id: ""
	I0805 13:02:47.606976  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.606987  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:47.606995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:47.607073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:47.644704  451238 cri.go:89] found id: ""
	I0805 13:02:47.644741  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.644753  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:47.644762  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:47.644828  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:47.687361  451238 cri.go:89] found id: ""
	I0805 13:02:47.687395  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.687408  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:47.687427  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:47.687453  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.766572  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:47.766614  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:47.812209  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:47.812242  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:47.862948  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:47.862987  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:47.878697  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:47.878729  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:47.951680  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.452861  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:50.466370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:50.466440  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:50.500001  451238 cri.go:89] found id: ""
	I0805 13:02:50.500031  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.500043  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:50.500051  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:50.500126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:50.541752  451238 cri.go:89] found id: ""
	I0805 13:02:50.541786  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.541794  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:50.541800  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:50.541864  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:50.578889  451238 cri.go:89] found id: ""
	I0805 13:02:50.578915  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.578923  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:50.578930  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:50.578984  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:50.614865  451238 cri.go:89] found id: ""
	I0805 13:02:50.614896  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.614906  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:50.614912  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:50.614980  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:50.656169  451238 cri.go:89] found id: ""
	I0805 13:02:50.656195  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.656202  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:50.656209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:50.656277  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:50.695050  451238 cri.go:89] found id: ""
	I0805 13:02:50.695082  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.695099  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:50.695108  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:50.695187  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:50.733205  451238 cri.go:89] found id: ""
	I0805 13:02:50.733233  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.733242  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:50.733249  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:50.733300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:50.770654  451238 cri.go:89] found id: ""
	I0805 13:02:50.770683  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.770693  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:50.770706  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:50.770721  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:50.826521  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:50.826567  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:50.842153  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:50.842181  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:50.916445  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.916474  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:50.916487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:50.999973  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:51.000020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:48.525240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.024459  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:49.907505  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.909037  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:50.946199  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.444128  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
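	[editor's note] The interleaved pod_ready lines (processes 450884, 450576 and 450393) are other clusters in this run polling their metrics-server pods: each poll reads the pod's Ready condition, logs "Ready":"False", sleeps, and retries until a deadline. A standard-library sketch of that poll-until-timeout pattern; checkReady is a hypothetical stand-in for the real condition lookup against the API server:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // checkReady is a hypothetical stand-in for reading the pod's
    // Ready condition from the cluster.
    func checkReady() (bool, error) { return false, nil }

    // waitPodReady polls checkReady every interval until it returns true
    // or the timeout elapses, mirroring the retry/timeout pattern in the log.
    func waitPodReady(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ready, err := checkReady()
    		if err != nil {
    			return err
    		}
    		if ready {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for pod to be Ready")
    }

    func main() {
    	fmt.Println(waitPodReady(2*time.Second, 10*time.Second))
    }

	When the deadline passes, the poller reports a timeout like the 4m0s failure recorded further down for metrics-server-6867b74b74-p7b2r.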
	I0805 13:02:53.539541  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:53.553804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:53.553893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:53.593075  451238 cri.go:89] found id: ""
	I0805 13:02:53.593105  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.593114  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:53.593121  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:53.593190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:53.629967  451238 cri.go:89] found id: ""
	I0805 13:02:53.630001  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.630012  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:53.630020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:53.630088  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:53.663535  451238 cri.go:89] found id: ""
	I0805 13:02:53.663564  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.663572  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:53.663577  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:53.663635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:53.697650  451238 cri.go:89] found id: ""
	I0805 13:02:53.697676  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.697684  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:53.697690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:53.697741  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:53.732845  451238 cri.go:89] found id: ""
	I0805 13:02:53.732873  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.732883  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:53.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:53.732950  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:53.774673  451238 cri.go:89] found id: ""
	I0805 13:02:53.774703  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.774712  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:53.774719  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:53.774783  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:53.815368  451238 cri.go:89] found id: ""
	I0805 13:02:53.815401  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.815413  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:53.815423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:53.815487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:53.849726  451238 cri.go:89] found id: ""
	I0805 13:02:53.849760  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.849771  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:53.849785  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:53.849801  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:53.925356  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:53.925398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.966721  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:53.966751  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:54.023096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:54.023140  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:54.037634  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:54.037666  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:54.115159  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:56.616326  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:56.629665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:56.629744  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:56.665665  451238 cri.go:89] found id: ""
	I0805 13:02:56.665701  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.665713  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:56.665722  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:56.665790  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:56.700446  451238 cri.go:89] found id: ""
	I0805 13:02:56.700473  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.700481  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:56.700488  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:56.700554  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:56.737152  451238 cri.go:89] found id: ""
	I0805 13:02:56.737190  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.737202  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:56.737210  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:56.737283  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:56.777909  451238 cri.go:89] found id: ""
	I0805 13:02:56.777942  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.777954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:56.777961  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:56.778027  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:56.813503  451238 cri.go:89] found id: ""
	I0805 13:02:56.813537  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.813547  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:56.813556  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:56.813625  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:56.848964  451238 cri.go:89] found id: ""
	I0805 13:02:56.848993  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.849002  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:56.849008  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:56.849071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:56.884310  451238 cri.go:89] found id: ""
	I0805 13:02:56.884339  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.884347  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:56.884356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:56.884417  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:56.925895  451238 cri.go:89] found id: ""
	I0805 13:02:56.925926  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.925936  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:56.925948  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:56.925962  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:53.025086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.025424  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.026117  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.909851  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.411536  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.945123  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.945278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.982847  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:56.982882  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:56.997703  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:56.997742  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:57.071130  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:57.071153  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:57.071174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:57.152985  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:57.153029  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.697501  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:59.711799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:59.711879  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:59.746992  451238 cri.go:89] found id: ""
	I0805 13:02:59.747024  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.747035  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:59.747043  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:59.747115  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:59.780563  451238 cri.go:89] found id: ""
	I0805 13:02:59.780592  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.780604  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:59.780611  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:59.780676  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:59.816973  451238 cri.go:89] found id: ""
	I0805 13:02:59.817007  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.817019  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:59.817027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:59.817098  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:59.851989  451238 cri.go:89] found id: ""
	I0805 13:02:59.852018  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.852028  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:59.852035  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:59.852086  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:59.887491  451238 cri.go:89] found id: ""
	I0805 13:02:59.887517  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.887525  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:59.887535  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:59.887587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:59.924965  451238 cri.go:89] found id: ""
	I0805 13:02:59.924997  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.925005  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:59.925012  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:59.925062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:59.965830  451238 cri.go:89] found id: ""
	I0805 13:02:59.965860  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.965868  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:59.965875  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:59.965932  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:00.003208  451238 cri.go:89] found id: ""
	I0805 13:03:00.003241  451238 logs.go:276] 0 containers: []
	W0805 13:03:00.003250  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:00.003260  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:00.003275  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:00.056865  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:00.056911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:00.070563  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:00.070593  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:00.137931  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:00.137957  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:00.137976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:00.221598  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:00.221649  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.525042  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.024461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:58.903499  450576 pod_ready.go:81] duration metric: took 4m0.001018928s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	E0805 13:02:58.903533  450576 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:02:58.903556  450576 pod_ready.go:38] duration metric: took 4m8.049032492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:02:58.903598  450576 kubeadm.go:597] duration metric: took 4m18.518107211s to restartPrimaryControlPlane
	W0805 13:02:58.903786  450576 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:02:58.903819  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
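	[editor's note] Here the 450576 run has exhausted its 4m0s wait for metrics-server, reports that it cannot restart the existing control plane, and falls back to wiping it with kubeadm reset before re-initialising. A sketch of issuing that reset with an overall deadline, standard library only; the 2-minute timeout is illustrative and not taken from minikube:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Bound the reset so a hung container runtime cannot block forever.
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	// Same command the log shows, minus the PATH override for the
    	// versioned kubeadm binary.
    	cmd := exec.CommandContext(ctx, "sudo", "kubeadm", "reset",
    		"--cri-socket", "/var/run/crio/crio.sock", "--force")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("kubeadm reset: err=%v\noutput:\n%s\n", err, out)
    }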
	I0805 13:02:59.945464  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.443954  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.761328  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:02.775836  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:02.775904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:02.812714  451238 cri.go:89] found id: ""
	I0805 13:03:02.812752  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.812764  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:02.812773  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:02.812848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:02.850072  451238 cri.go:89] found id: ""
	I0805 13:03:02.850103  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.850130  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:02.850138  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:02.850197  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:02.886956  451238 cri.go:89] found id: ""
	I0805 13:03:02.887081  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.887103  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:02.887114  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:02.887188  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:02.924874  451238 cri.go:89] found id: ""
	I0805 13:03:02.924906  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.924918  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:02.924925  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:02.924996  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:02.965965  451238 cri.go:89] found id: ""
	I0805 13:03:02.965996  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.966007  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:02.966015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:02.966101  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:03.001081  451238 cri.go:89] found id: ""
	I0805 13:03:03.001118  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.001130  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:03.001140  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:03.001201  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:03.036194  451238 cri.go:89] found id: ""
	I0805 13:03:03.036223  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.036234  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:03.036243  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:03.036303  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:03.071905  451238 cri.go:89] found id: ""
	I0805 13:03:03.071940  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.071951  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:03.071964  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:03.071982  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:03.124400  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:03.124442  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:03.138492  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:03.138520  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:03.207300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:03.207326  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:03.207342  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:03.294941  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:03.294983  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:05.836187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:05.850504  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:05.850609  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:05.889692  451238 cri.go:89] found id: ""
	I0805 13:03:05.889718  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.889729  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:05.889737  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:05.889804  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:05.924597  451238 cri.go:89] found id: ""
	I0805 13:03:05.924630  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.924640  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:05.924647  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:05.924711  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:05.960373  451238 cri.go:89] found id: ""
	I0805 13:03:05.960404  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.960413  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:05.960419  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:05.960471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:05.996583  451238 cri.go:89] found id: ""
	I0805 13:03:05.996617  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.996628  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:05.996636  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:05.996708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:06.033539  451238 cri.go:89] found id: ""
	I0805 13:03:06.033567  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.033575  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:06.033586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:06.033655  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:06.069348  451238 cri.go:89] found id: ""
	I0805 13:03:06.069378  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.069391  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:06.069401  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:06.069466  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:06.103570  451238 cri.go:89] found id: ""
	I0805 13:03:06.103599  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.103607  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:06.103613  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:06.103665  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:06.140230  451238 cri.go:89] found id: ""
	I0805 13:03:06.140260  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.140271  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:06.140284  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:06.140300  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:06.191073  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:06.191123  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:06.204825  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:06.204857  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:06.281309  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:06.281339  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:06.281358  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:06.361709  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:06.361749  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:04.025007  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.524506  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:04.444267  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.444910  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.445441  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.903194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:08.921602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:08.921681  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:08.960916  451238 cri.go:89] found id: ""
	I0805 13:03:08.960945  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.960975  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:08.960986  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:08.961055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:08.996316  451238 cri.go:89] found id: ""
	I0805 13:03:08.996417  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.996436  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:08.996448  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:08.996522  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:09.038536  451238 cri.go:89] found id: ""
	I0805 13:03:09.038572  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.038584  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:09.038593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:09.038664  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:09.075368  451238 cri.go:89] found id: ""
	I0805 13:03:09.075396  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.075405  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:09.075412  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:09.075474  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:09.114232  451238 cri.go:89] found id: ""
	I0805 13:03:09.114262  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.114272  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:09.114280  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:09.114353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:09.161878  451238 cri.go:89] found id: ""
	I0805 13:03:09.161964  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.161978  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:09.161988  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:09.162062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:09.206694  451238 cri.go:89] found id: ""
	I0805 13:03:09.206727  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.206739  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:09.206748  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:09.206890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:09.257029  451238 cri.go:89] found id: ""
	I0805 13:03:09.257066  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.257079  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:09.257090  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:09.257107  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:09.278638  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:09.278679  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:09.353760  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:09.353781  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:09.353793  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:09.438371  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:09.438419  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:09.487253  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:09.487297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:08.018954  450884 pod_ready.go:81] duration metric: took 4m0.00055059s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:08.018987  450884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:03:08.019010  450884 pod_ready.go:38] duration metric: took 4m11.028507743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:08.019048  450884 kubeadm.go:597] duration metric: took 4m19.097834327s to restartPrimaryControlPlane
	W0805 13:03:08.019122  450884 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:08.019157  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:10.945002  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.945953  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.042215  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:12.055721  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:12.055812  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:12.096936  451238 cri.go:89] found id: ""
	I0805 13:03:12.096965  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.096977  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:12.096985  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:12.097051  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:12.136149  451238 cri.go:89] found id: ""
	I0805 13:03:12.136181  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.136192  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:12.136199  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:12.136276  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:12.180568  451238 cri.go:89] found id: ""
	I0805 13:03:12.180606  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.180618  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:12.180626  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:12.180695  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:12.221759  451238 cri.go:89] found id: ""
	I0805 13:03:12.221794  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.221806  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:12.221815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:12.221882  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:12.259460  451238 cri.go:89] found id: ""
	I0805 13:03:12.259490  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.259498  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:12.259508  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:12.259563  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:12.301245  451238 cri.go:89] found id: ""
	I0805 13:03:12.301277  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.301289  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:12.301297  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:12.301368  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:12.343640  451238 cri.go:89] found id: ""
	I0805 13:03:12.343678  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.343690  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:12.343698  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:12.343809  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:12.382729  451238 cri.go:89] found id: ""
	I0805 13:03:12.382762  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.382774  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:12.382787  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:12.382807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:12.400862  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:12.400897  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:12.478755  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:12.478788  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:12.478807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:12.566029  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:12.566080  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:12.611834  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:12.611929  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:15.171517  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:15.185569  451238 kubeadm.go:597] duration metric: took 4m3.737627997s to restartPrimaryControlPlane
	W0805 13:03:15.185662  451238 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:15.185697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:15.669994  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:15.684794  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:15.695088  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:15.705403  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:15.705427  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:15.705488  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:15.714777  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:15.714833  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:15.724437  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:15.733263  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:15.733317  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:15.743004  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.752219  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:15.752278  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.761788  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:15.771193  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:15.771245  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:15.780964  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:15.855628  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:03:15.855751  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:16.015686  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:16.015880  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:16.016041  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:16.207054  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:16.209133  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:16.209256  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:16.209376  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:16.209493  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:16.209597  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:16.209703  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:16.211637  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:16.211726  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:16.211833  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:16.211959  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:16.212690  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:16.212863  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:16.212963  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:16.283080  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:16.609523  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:16.765635  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:16.934487  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:16.955335  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:16.956267  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:16.956328  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:17.088081  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:15.445305  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.447306  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.090118  451238 out.go:204]   - Booting up control plane ...
	I0805 13:03:17.090264  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:17.100902  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:17.101263  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:17.102210  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:17.112522  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:03:19.943658  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:21.944253  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:23.945158  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:25.252381  450576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.348530672s)
	I0805 13:03:25.252504  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:25.269305  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:25.279322  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:25.289241  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:25.289266  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:25.289304  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:25.298671  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:25.298732  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:25.309962  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:25.320180  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:25.320247  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:25.330481  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.340565  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:25.340652  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.351244  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:25.361443  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:25.361536  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:25.371655  450576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:25.419277  450576 kubeadm.go:310] W0805 13:03:25.398597    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.420220  450576 kubeadm.go:310] W0805 13:03:25.399642    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.537148  450576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:25.945501  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:27.945972  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.413703  450576 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0805 13:03:33.413775  450576 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:33.413863  450576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:33.414008  450576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:33.414152  450576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:33.414235  450576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:33.415804  450576 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:33.415874  450576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:33.415949  450576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:33.416037  450576 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:33.416101  450576 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:33.416174  450576 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:33.416237  450576 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:33.416289  450576 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:33.416357  450576 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:33.416437  450576 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:33.416518  450576 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:33.416553  450576 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:33.416603  450576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:33.416646  450576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:33.416701  450576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:33.416745  450576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:33.416816  450576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:33.416878  450576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:33.416971  450576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:33.417059  450576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:33.418572  450576 out.go:204]   - Booting up control plane ...
	I0805 13:03:33.418671  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:33.418751  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:33.418833  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:33.418965  450576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:33.419092  450576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:33.419172  450576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:33.419342  450576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:33.419488  450576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0805 13:03:33.419577  450576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.308417ms
	I0805 13:03:33.419672  450576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:33.419780  450576 kubeadm.go:310] [api-check] The API server is healthy after 5.001429681s
	I0805 13:03:33.419908  450576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:33.420049  450576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:33.420117  450576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:33.420293  450576 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-669469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:33.420385  450576 kubeadm.go:310] [bootstrap-token] Using token: i9zl3x.c4hzh1c9ccxlydzt
	I0805 13:03:33.421925  450576 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:33.422042  450576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:33.422157  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:33.422352  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:33.422488  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:33.422649  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:33.422784  450576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:33.422914  450576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:33.422991  450576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:33.423060  450576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:33.423070  450576 kubeadm.go:310] 
	I0805 13:03:33.423160  450576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:33.423173  450576 kubeadm.go:310] 
	I0805 13:03:33.423274  450576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:33.423283  450576 kubeadm.go:310] 
	I0805 13:03:33.423316  450576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:33.423409  450576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:33.423495  450576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:33.423513  450576 kubeadm.go:310] 
	I0805 13:03:33.423616  450576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:33.423628  450576 kubeadm.go:310] 
	I0805 13:03:33.423692  450576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:33.423701  450576 kubeadm.go:310] 
	I0805 13:03:33.423793  450576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:33.423931  450576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:33.424030  450576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:33.424039  450576 kubeadm.go:310] 
	I0805 13:03:33.424106  450576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:33.424176  450576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:33.424185  450576 kubeadm.go:310] 
	I0805 13:03:33.424282  450576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424430  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:33.424473  450576 kubeadm.go:310] 	--control-plane 
	I0805 13:03:33.424482  450576 kubeadm.go:310] 
	I0805 13:03:33.424588  450576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:33.424602  450576 kubeadm.go:310] 
	I0805 13:03:33.424725  450576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424870  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:33.424892  450576 cni.go:84] Creating CNI manager for ""
	I0805 13:03:33.424911  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:33.426503  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:33.427981  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:33.439484  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:33.458459  450576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:33.458547  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:33.458579  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-669469 minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=no-preload-669469 minikube.k8s.io/primary=true
	I0805 13:03:33.488847  450576 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:29.946423  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:32.444923  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.674306  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.174940  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.674936  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.174693  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.675004  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.174801  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.674878  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.174394  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.263948  450576 kubeadm.go:1113] duration metric: took 3.805464287s to wait for elevateKubeSystemPrivileges
	I0805 13:03:37.263985  450576 kubeadm.go:394] duration metric: took 4m56.93214495s to StartCluster
	I0805 13:03:37.264025  450576 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.264143  450576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:03:37.265965  450576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.266283  450576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:03:37.266400  450576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:03:37.266469  450576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-669469"
	I0805 13:03:37.266510  450576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-669469"
	W0805 13:03:37.266518  450576 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:03:37.266519  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:03:37.266551  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.266505  450576 addons.go:69] Setting default-storageclass=true in profile "no-preload-669469"
	I0805 13:03:37.266547  450576 addons.go:69] Setting metrics-server=true in profile "no-preload-669469"
	I0805 13:03:37.266612  450576 addons.go:234] Setting addon metrics-server=true in "no-preload-669469"
	I0805 13:03:37.266616  450576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-669469"
	W0805 13:03:37.266627  450576 addons.go:243] addon metrics-server should already be in state true
	I0805 13:03:37.266668  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267035  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267041  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267085  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267985  450576 out.go:177] * Verifying Kubernetes components...
	I0805 13:03:37.269486  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:03:37.283242  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0805 13:03:37.283291  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0805 13:03:37.283245  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0805 13:03:37.283710  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283785  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283717  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284316  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284319  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284335  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284360  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284734  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284735  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284746  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284963  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.285343  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285375  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.285387  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285441  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.288699  450576 addons.go:234] Setting addon default-storageclass=true in "no-preload-669469"
	W0805 13:03:37.288722  450576 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:03:37.288753  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.289023  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.289049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.303814  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0805 13:03:37.304491  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.305081  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.305104  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.305552  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0805 13:03:37.305566  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0805 13:03:37.305583  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.305928  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306007  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306148  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.306190  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.306485  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306503  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306595  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306611  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306971  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.306998  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.307157  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.307162  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.309002  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.309241  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.311054  450576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:03:37.311055  450576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:03:37.312682  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:03:37.312695  450576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:03:37.312710  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.312834  450576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.312856  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:03:37.312874  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.317044  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317635  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.317660  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317753  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317955  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318141  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.318360  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.318400  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.318427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.318539  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.318633  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318967  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.319111  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.319241  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.325066  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0805 13:03:37.325633  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.326052  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.326071  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.326326  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.326473  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.328502  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.328814  450576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.328826  450576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:03:37.328839  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.331482  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.331853  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.331874  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.332013  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.332169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.332270  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.332358  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.483477  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:03:37.501924  450576 node_ready.go:35] waiting up to 6m0s for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511394  450576 node_ready.go:49] node "no-preload-669469" has status "Ready":"True"
	I0805 13:03:37.511427  450576 node_ready.go:38] duration metric: took 9.462968ms for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511443  450576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:37.526505  450576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:37.575598  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.583338  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:03:37.583362  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:03:37.594019  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.629885  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:03:37.629913  450576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:03:37.684790  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.684825  450576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:03:37.753629  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.857352  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857386  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.857777  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.857780  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.857812  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.857829  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857838  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.858101  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.858117  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.858153  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.871616  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.871639  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.871970  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.872022  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.872031  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290429  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290449  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290784  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.290856  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290871  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290880  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290829  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.291265  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.291289  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.291271  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.880274  450576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.126602375s)
	I0805 13:03:38.880331  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880344  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880868  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.880896  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.880906  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880916  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880871  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881196  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.881204  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881211  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.881230  450576 addons.go:475] Verifying addon metrics-server=true in "no-preload-669469"
	I0805 13:03:38.882896  450576 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
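The node_ready/pod_ready waits above poll the API server for Ready conditions before the addons are declared enabled. A minimal client-go sketch of that kind of node readiness check (illustrative only; minikube's own node_ready.go differs in detail, and the kubeconfig path and node name here are taken from the log purely as examples):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: kubeconfig path and node name are illustrative.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 2s, up to 6m, until the node reports Ready=True.
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-669469", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API errors: keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				return true, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }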
	I0805 13:03:34.945631  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:37.446855  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:39.741362  450884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.722174979s)
	I0805 13:03:39.741438  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:39.760465  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:39.770587  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:39.780157  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:39.780177  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:39.780215  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 13:03:39.790172  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:39.790243  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:39.803838  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 13:03:39.816314  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:39.816367  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:39.826636  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.836513  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:39.836570  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.846356  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 13:03:39.855694  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:39.855770  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:39.865721  450884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:40.081251  450884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
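The stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not mention it before re-running kubeadm init. A rough Go sketch of that pattern, under the assumption that the file list and endpoint match the log; the helper name is made up and this is not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleKubeconfigs removes any kubeconfig that no longer points at the
    // expected control-plane endpoint, so kubeadm init can regenerate it.
    // (Illustrative helper only.)
    func cleanStaleKubeconfigs(endpoint string, files []string) error {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if os.IsNotExist(err) {
    			continue // nothing to clean
    		}
    		if err != nil {
    			return err
    		}
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
    			if err := os.Remove(f); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444", files); err != nil {
    		panic(err)
    	}
    }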
	I0805 13:03:38.884521  450576 addons.go:510] duration metric: took 1.618121451s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:03:39.536758  450576 pod_ready.go:102] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:41.035239  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.035266  450576 pod_ready.go:81] duration metric: took 3.508734543s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.035280  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042787  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.042811  450576 pod_ready.go:81] duration metric: took 7.522909ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042824  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048338  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:42.048363  450576 pod_ready.go:81] duration metric: took 1.005531569s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048373  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:39.945815  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:42.445704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:44.056394  450576 pod_ready.go:102] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:45.555280  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:45.555310  450576 pod_ready.go:81] duration metric: took 3.506927542s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:45.555321  450576 pod_ready.go:38] duration metric: took 8.043865797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:45.555338  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:45.555397  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:45.572225  450576 api_server.go:72] duration metric: took 8.30589728s to wait for apiserver process to appear ...
	I0805 13:03:45.572249  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:45.572272  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 13:03:45.578042  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 13:03:45.579014  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 13:03:45.579034  450576 api_server.go:131] duration metric: took 6.778214ms to wait for apiserver health ...
	I0805 13:03:45.579042  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:45.585537  450576 system_pods.go:59] 9 kube-system pods found
	I0805 13:03:45.585660  450576 system_pods.go:61] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.585673  450576 system_pods.go:61] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.585679  450576 system_pods.go:61] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.585687  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.585694  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.585700  450576 system_pods.go:61] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.585705  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.585716  450576 system_pods.go:61] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.585726  450576 system_pods.go:61] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.585736  450576 system_pods.go:74] duration metric: took 6.688464ms to wait for pod list to return data ...
	I0805 13:03:45.585749  450576 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:03:45.589498  450576 default_sa.go:45] found service account: "default"
	I0805 13:03:45.589526  450576 default_sa.go:55] duration metric: took 3.765664ms for default service account to be created ...
	I0805 13:03:45.589535  450576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:03:45.597499  450576 system_pods.go:86] 9 kube-system pods found
	I0805 13:03:45.597527  450576 system_pods.go:89] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.597533  450576 system_pods.go:89] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.597537  450576 system_pods.go:89] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.597541  450576 system_pods.go:89] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.597547  450576 system_pods.go:89] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.597550  450576 system_pods.go:89] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.597554  450576 system_pods.go:89] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.597563  450576 system_pods.go:89] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.597568  450576 system_pods.go:89] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.597577  450576 system_pods.go:126] duration metric: took 8.035546ms to wait for k8s-apps to be running ...
	I0805 13:03:45.597586  450576 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:03:45.597631  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:45.619317  450576 system_svc.go:56] duration metric: took 21.706117ms WaitForService to wait for kubelet
	I0805 13:03:45.619365  450576 kubeadm.go:582] duration metric: took 8.353035332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:03:45.619398  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:03:45.622763  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:03:45.622790  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 13:03:45.622801  450576 node_conditions.go:105] duration metric: took 3.396756ms to run NodePressure ...
	I0805 13:03:45.622814  450576 start.go:241] waiting for startup goroutines ...
	I0805 13:03:45.622821  450576 start.go:246] waiting for cluster config update ...
	I0805 13:03:45.622831  450576 start.go:255] writing updated cluster config ...
	I0805 13:03:45.623102  450576 ssh_runner.go:195] Run: rm -f paused
	I0805 13:03:45.682547  450576 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0805 13:03:45.684415  450576 out.go:177] * Done! kubectl is now configured to use "no-preload-669469" cluster and "default" namespace by default
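Before printing "Done!", api_server.go above first waits for the kube-apiserver process and then polls https://192.168.72.223:8443/healthz until it returns 200 ("ok"). A minimal sketch of that health probe; skipping TLS verification is an assumption made only to keep the example short, since minikube authenticates against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Assumption: InsecureSkipVerify is for brevity; minikube verifies the
    	// endpoint with the cluster's CA certificate instead.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.72.223:8443/healthz"

    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200: %s\n", url, body) // expect "ok"
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	panic("apiserver never became healthy")
    }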
	I0805 13:03:48.707730  450884 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 13:03:48.707817  450884 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:48.707920  450884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:48.708065  450884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:48.708218  450884 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:48.708311  450884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:48.709807  450884 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:48.709878  450884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:48.709931  450884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:48.710008  450884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:48.710084  450884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:48.710148  450884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:48.710196  450884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:48.710251  450884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:48.710316  450884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:48.710415  450884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:48.710520  450884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:48.710582  450884 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:48.710656  450884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:48.710700  450884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:48.710746  450884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:48.710790  450884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:48.710843  450884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:48.710895  450884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:48.710971  450884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:48.711055  450884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:48.713503  450884 out.go:204]   - Booting up control plane ...
	I0805 13:03:48.713601  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:48.713687  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:48.713763  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:48.713911  450884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:48.714039  450884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:48.714105  450884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:48.714222  450884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:48.714284  450884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 13:03:48.714345  450884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.128103ms
	I0805 13:03:48.714423  450884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:48.714491  450884 kubeadm.go:310] [api-check] The API server is healthy after 5.502076793s
	I0805 13:03:48.714600  450884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:48.714730  450884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:48.714794  450884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:48.714987  450884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-371585 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:48.715075  450884 kubeadm.go:310] [bootstrap-token] Using token: cpuyhq.sjq5yhx27tk7meks
	I0805 13:03:48.716575  450884 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:48.716686  450884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:48.716775  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:48.716952  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:48.717075  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:48.717196  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:48.717270  450884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:48.717391  450884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:48.717450  450884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:48.717512  450884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:48.717521  450884 kubeadm.go:310] 
	I0805 13:03:48.717613  450884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:48.717623  450884 kubeadm.go:310] 
	I0805 13:03:48.717724  450884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:48.717734  450884 kubeadm.go:310] 
	I0805 13:03:48.717768  450884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:48.717848  450884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:48.717892  450884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:48.717898  450884 kubeadm.go:310] 
	I0805 13:03:48.717968  450884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:48.717978  450884 kubeadm.go:310] 
	I0805 13:03:48.718047  450884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:48.718057  450884 kubeadm.go:310] 
	I0805 13:03:48.718133  450884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:48.718220  450884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:48.718297  450884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:48.718304  450884 kubeadm.go:310] 
	I0805 13:03:48.718422  450884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:48.718506  450884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:48.718513  450884 kubeadm.go:310] 
	I0805 13:03:48.718585  450884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718669  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:48.718688  450884 kubeadm.go:310] 	--control-plane 
	I0805 13:03:48.718694  450884 kubeadm.go:310] 
	I0805 13:03:48.718761  450884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:48.718769  450884 kubeadm.go:310] 
	I0805 13:03:48.718848  450884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718948  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:48.718957  450884 cni.go:84] Creating CNI manager for ""
	I0805 13:03:48.718965  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:48.720262  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:44.946225  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:47.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:48.721390  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:48.732324  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:48.750318  450884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:48.750397  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:48.750398  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-371585 minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=default-k8s-diff-port-371585 minikube.k8s.io/primary=true
	I0805 13:03:48.781590  450884 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:48.966544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.467473  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.967093  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.466813  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.967183  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.467350  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.967432  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
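The repeated "kubectl get sa default" runs above are a wait loop: the minikube-rbac cluster role binding and node labels can only be applied once the controller manager has created the default service account in the default namespace. A client-go sketch of the same wait (the kubeconfig path is an assumption; minikube itself shells out to kubectl over SSH as shown in the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Retry every 500ms for up to 2 minutes until the "default" service
    	// account appears; it is created asynchronously after kubeadm init.
    	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
    		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    		return err == nil, nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("default service account is ready")
    }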
	I0805 13:03:49.444667  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:49.444719  450393 pod_ready.go:81] duration metric: took 4m0.006667631s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:49.444731  450393 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0805 13:03:49.444738  450393 pod_ready.go:38] duration metric: took 4m2.407503205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:49.444757  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:49.444787  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:49.444849  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:49.502039  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:49.502067  450393 cri.go:89] found id: ""
	I0805 13:03:49.502079  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:49.502139  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.510426  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:49.510494  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:49.553861  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:49.553889  450393 cri.go:89] found id: ""
	I0805 13:03:49.553899  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:49.553960  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.558802  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:49.558868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:49.594787  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:49.594810  450393 cri.go:89] found id: ""
	I0805 13:03:49.594828  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:49.594891  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.599735  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:49.599822  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:49.637856  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:49.637878  450393 cri.go:89] found id: ""
	I0805 13:03:49.637886  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:49.637939  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.642228  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:49.642295  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:49.683822  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:49.683844  450393 cri.go:89] found id: ""
	I0805 13:03:49.683853  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:49.683913  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.688077  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:49.688155  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:49.724887  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:49.724913  450393 cri.go:89] found id: ""
	I0805 13:03:49.724923  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:49.724987  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.728965  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:49.729052  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:49.765826  450393 cri.go:89] found id: ""
	I0805 13:03:49.765859  450393 logs.go:276] 0 containers: []
	W0805 13:03:49.765871  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:49.765878  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:49.765944  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:49.803790  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:49.803811  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.803815  450393 cri.go:89] found id: ""
	I0805 13:03:49.803823  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:49.803887  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.808064  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.812308  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:49.812332  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.851842  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:49.851867  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:50.418758  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:50.418808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:50.564965  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:50.564999  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:50.608518  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:50.608557  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:50.658446  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:50.658482  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:50.699924  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:50.699962  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:50.741228  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:50.741264  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:50.776100  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:50.776133  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:50.827847  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:50.827880  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:50.867699  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:50.867731  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:50.920049  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:50.920085  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:50.934198  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:50.934224  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:53.477808  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:53.494062  450393 api_server.go:72] duration metric: took 4m14.183013645s to wait for apiserver process to appear ...
	I0805 13:03:53.494093  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:53.494143  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:53.494211  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:53.534293  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:53.534322  450393 cri.go:89] found id: ""
	I0805 13:03:53.534333  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:53.534400  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.539014  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:53.539088  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:53.576587  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:53.576608  450393 cri.go:89] found id: ""
	I0805 13:03:53.576616  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:53.576667  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.582068  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:53.582147  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:53.623240  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:53.623264  450393 cri.go:89] found id: ""
	I0805 13:03:53.623274  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:53.623352  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.627638  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:53.627699  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:53.668167  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:53.668198  450393 cri.go:89] found id: ""
	I0805 13:03:53.668209  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:53.668281  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.672390  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:53.672469  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:53.714046  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:53.714069  450393 cri.go:89] found id: ""
	I0805 13:03:53.714078  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:53.714130  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.718325  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:53.718392  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:53.756343  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:53.756372  450393 cri.go:89] found id: ""
	I0805 13:03:53.756382  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:53.756444  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.760627  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:53.760696  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:53.806370  450393 cri.go:89] found id: ""
	I0805 13:03:53.806406  450393 logs.go:276] 0 containers: []
	W0805 13:03:53.806424  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:53.806432  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:53.806505  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:53.843082  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:53.843116  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:53.843121  450393 cri.go:89] found id: ""
	I0805 13:03:53.843129  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:53.843188  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.847214  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.851093  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:53.851112  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
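The log-gathering passes above collect the last 400 lines per component: journalctl for the kubelet and CRI-O units, and "crictl logs --tail 400 <container-id>" for each container found with crictl ps. A small sketch of the per-container part (run locally here for brevity; minikube executes these over SSH, and the container IDs are copied from this log so they will differ on any other run):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherContainerLogs mirrors the "crictl logs --tail 400 <id>" calls seen
    // in the log above. Illustrative helper, not minikube's implementation.
    func gatherContainerLogs(ids map[string]string) {
    	for name, id := range ids {
    		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Printf("failed to gather %s logs: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("==> %s <==\n%s\n", name, out)
    	}
    }

    func main() {
    	gatherContainerLogs(map[string]string{
    		"kube-apiserver": "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7",
    		"etcd":           "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804",
    	})
    }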
	I0805 13:03:52.467589  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:52.967390  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.466580  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.967544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.467454  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.967281  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.467111  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.467255  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.296506  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:54.296556  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:54.343983  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:54.344026  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:54.389236  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:54.389271  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:54.427964  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:54.427996  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:54.465953  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:54.465988  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:54.521755  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:54.521835  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:54.565481  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:54.565513  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:54.606592  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:54.606634  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:54.650820  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:54.650858  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:54.704512  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:54.704559  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:54.722149  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:54.722184  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:54.844289  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:54.844324  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.386998  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 13:03:57.391714  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 13:03:57.392752  450393 api_server.go:141] control plane version: v1.30.3
	I0805 13:03:57.392776  450393 api_server.go:131] duration metric: took 3.898675075s to wait for apiserver health ...
	I0805 13:03:57.392783  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:57.392812  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:57.392868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:57.430171  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:57.430201  450393 cri.go:89] found id: ""
	I0805 13:03:57.430210  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:57.430270  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.434861  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:57.434920  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:57.490595  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:57.490622  450393 cri.go:89] found id: ""
	I0805 13:03:57.490632  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:57.490702  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.496054  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:57.496141  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:57.540248  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:57.540278  450393 cri.go:89] found id: ""
	I0805 13:03:57.540289  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:57.540353  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.547750  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:57.547820  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:57.595821  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.595852  450393 cri.go:89] found id: ""
	I0805 13:03:57.595864  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:57.595932  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.600153  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:57.600225  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:57.640382  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:57.640409  450393 cri.go:89] found id: ""
	I0805 13:03:57.640418  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:57.640486  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.645476  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:57.645569  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:57.700199  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:57.700224  450393 cri.go:89] found id: ""
	I0805 13:03:57.700233  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:57.700294  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.704818  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:57.704874  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:57.745647  450393 cri.go:89] found id: ""
	I0805 13:03:57.745677  450393 logs.go:276] 0 containers: []
	W0805 13:03:57.745687  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:57.745696  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:57.745760  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:57.787327  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:57.787367  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:57.787374  450393 cri.go:89] found id: ""
	I0805 13:03:57.787384  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:57.787448  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.792340  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.796906  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:57.796933  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:57.850401  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:57.850447  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:57.961760  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:57.961808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:58.009682  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:58.009720  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:58.061874  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:58.061915  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:58.105715  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:58.105745  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:58.164739  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:58.164780  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:58.203530  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:58.203579  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:58.245478  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:58.245511  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:58.647807  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:58.647857  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:58.694175  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:58.694211  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:58.709744  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:58.709773  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:58.750668  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:58.750698  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
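Each of the "Gathering logs for ..." steps above boils down to tailing one container's logs with crictl on the node (minikube runs these over SSH via ssh_runner). A minimal Go sketch of that single step, assuming local access to crictl and reusing a container ID copied from the log; it is illustrative only, not minikube's logs package:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // crictlLogs returns the last `tail` lines of logs for the given container ID,
    // equivalent to: sudo /usr/bin/crictl logs --tail <tail> <containerID>
    func crictlLogs(containerID string, tail int) (string, error) {
        cmd := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(tail), containerID)
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        // Container ID taken from the log above; any existing container ID works.
        out, err := crictlLogs("8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756", 400)
        if err != nil {
            fmt.Println("crictl logs failed:", err)
        }
        fmt.Print(out)
    }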
	I0805 13:04:01.297212  450393 system_pods.go:59] 8 kube-system pods found
	I0805 13:04:01.297248  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.297255  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.297261  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.297265  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.297269  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.297273  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.297281  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.297289  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.297300  450393 system_pods.go:74] duration metric: took 3.904508974s to wait for pod list to return data ...
	I0805 13:04:01.297312  450393 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:01.299765  450393 default_sa.go:45] found service account: "default"
	I0805 13:04:01.299792  450393 default_sa.go:55] duration metric: took 2.470684ms for default service account to be created ...
	I0805 13:04:01.299802  450393 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:01.304612  450393 system_pods.go:86] 8 kube-system pods found
	I0805 13:04:01.304644  450393 system_pods.go:89] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.304651  450393 system_pods.go:89] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.304656  450393 system_pods.go:89] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.304661  450393 system_pods.go:89] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.304665  450393 system_pods.go:89] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.304670  450393 system_pods.go:89] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.304677  450393 system_pods.go:89] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.304685  450393 system_pods.go:89] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.304694  450393 system_pods.go:126] duration metric: took 4.885808ms to wait for k8s-apps to be running ...
	I0805 13:04:01.304702  450393 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:01.304751  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:01.323278  450393 system_svc.go:56] duration metric: took 18.55935ms WaitForService to wait for kubelet
	I0805 13:04:01.323316  450393 kubeadm.go:582] duration metric: took 4m22.01227204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:01.323349  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:01.326802  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:01.326829  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:01.326843  450393 node_conditions.go:105] duration metric: took 3.486931ms to run NodePressure ...
	I0805 13:04:01.326859  450393 start.go:241] waiting for startup goroutines ...
	I0805 13:04:01.326869  450393 start.go:246] waiting for cluster config update ...
	I0805 13:04:01.326883  450393 start.go:255] writing updated cluster config ...
	I0805 13:04:01.327230  450393 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:01.380315  450393 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:01.381891  450393 out.go:177] * Done! kubectl is now configured to use "embed-certs-321139" cluster and "default" namespace by default
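The "waiting for k8s-apps to be running" check above amounts to listing kube-system pods and inspecting their phase. An illustrative client-go sketch of that check, not minikube's implementation; the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; on the minikube guest this is /var/lib/minikube/kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("pod %s not running yet (phase %s)\n", p.Name, p.Status.Phase)
            }
        }
    }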
	I0805 13:03:57.113870  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:03:57.114408  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:03:57.114630  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:03:57.467412  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:57.967538  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.467217  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.967035  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.466816  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.966909  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.467553  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.967667  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.467382  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.967495  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:02.085428  450884 kubeadm.go:1113] duration metric: took 13.335097096s to wait for elevateKubeSystemPrivileges
	I0805 13:04:02.085464  450884 kubeadm.go:394] duration metric: took 5m13.227479413s to StartCluster
	I0805 13:04:02.085482  450884 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.085571  450884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:04:02.087178  450884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.087425  450884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:04:02.087550  450884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:04:02.087653  450884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087659  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:04:02.087681  450884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087697  450884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087718  450884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087729  450884 addons.go:243] addon metrics-server should already be in state true
	I0805 13:04:02.087783  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.087727  450884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-371585"
	I0805 13:04:02.087692  450884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087953  450884 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:04:02.087986  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088294  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088377  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088406  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088415  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088935  450884 out.go:177] * Verifying Kubernetes components...
	I0805 13:04:02.090386  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:04:02.105328  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0805 13:04:02.105335  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0805 13:04:02.105853  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.105848  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106395  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106398  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106420  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106423  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106506  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0805 13:04:02.106879  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.106957  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106982  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.107193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.107508  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.107522  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.107534  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.107561  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.107903  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.108458  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.108490  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.111681  450884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.111709  450884 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:04:02.111775  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.113601  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.113648  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.127860  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0805 13:04:02.128512  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.128619  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0805 13:04:02.129023  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.129174  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129198  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129495  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129516  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129566  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129850  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129879  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.130443  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.131691  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.132370  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.133468  450884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:04:02.134210  450884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:04:02.134899  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0805 13:04:02.135049  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:04:02.135067  450884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:04:02.135099  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135183  450884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.135201  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:04:02.135216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135404  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.136704  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.136723  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.138362  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.138801  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.138918  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139264  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.139290  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.139335  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139377  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139404  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139482  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139637  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139737  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139807  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139867  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.139909  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.159720  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0805 13:04:02.160199  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.160744  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.160770  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.161048  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.161246  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.162535  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.162788  450884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.162805  450884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:04:02.162825  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.165787  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166204  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.166236  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166411  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.166594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.166744  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.166876  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
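The "new ssh client" lines above correspond to opening an SSH session to the node with the machine's private key; the subsequent "Run:" commands are executed over that session. A hedged sketch using golang.org/x/crypto/ssh, with the address and key path taken from the log (host-key checking is skipped only because this is a throwaway test VM):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
        }
        client, err := ssh.Dial("tcp", "192.168.50.228:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("kubelet: %s err=%v\n", out, err)
    }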
	I0805 13:04:02.349175  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:04:02.453663  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.462474  450884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472177  450884 node_ready.go:49] node "default-k8s-diff-port-371585" has status "Ready":"True"
	I0805 13:04:02.472201  450884 node_ready.go:38] duration metric: took 9.692872ms for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472211  450884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:02.474341  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:04:02.474363  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:04:02.485604  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:02.514889  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.543388  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:04:02.543428  450884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:04:02.618040  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.618094  450884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:04:02.716705  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.784102  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784545  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784566  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784577  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784586  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784588  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.784851  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784868  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784868  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.797584  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.797617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.797938  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.797956  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431060  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431452  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:03.431511  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431530  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431539  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431839  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431893  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.746668  450884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029912928s)
	I0805 13:04:03.746734  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.746750  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.747152  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.747180  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.747191  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.747200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.748527  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.748558  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.748571  450884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-371585"
	I0805 13:04:03.750522  450884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:04:03.751714  450884 addons.go:510] duration metric: took 1.664163176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
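At its core, the metrics-server enable step above is one kubectl apply over the four manifests that were copied to /etc/kubernetes/addons. A simplified sketch of that invocation as run on the node, mirroring the logged command; it is not the addons package itself:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // sudo accepts VAR=value assignments before the command, exactly as logged.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }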
	I0805 13:04:04.491832  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.491861  450884 pod_ready.go:81] duration metric: took 2.00623062s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.491870  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496173  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.496194  450884 pod_ready.go:81] duration metric: took 4.317446ms for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496202  450884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500270  450884 pod_ready.go:92] pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.500297  450884 pod_ready.go:81] duration metric: took 4.088399ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500309  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504892  450884 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.504917  450884 pod_ready.go:81] duration metric: took 4.598589ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504926  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509448  450884 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.509468  450884 pod_ready.go:81] duration metric: took 4.535174ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509478  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890517  450884 pod_ready.go:92] pod "kube-proxy-4v6sn" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.890544  450884 pod_ready.go:81] duration metric: took 381.059204ms for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890552  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289670  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:05.289701  450884 pod_ready.go:81] duration metric: took 399.141309ms for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289712  450884 pod_ready.go:38] duration metric: took 2.817491444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:05.289732  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:04:05.289805  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:04:05.305815  450884 api_server.go:72] duration metric: took 3.218344531s to wait for apiserver process to appear ...
	I0805 13:04:05.305848  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:04:05.305870  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 13:04:05.311144  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 13:04:05.312427  450884 api_server.go:141] control plane version: v1.30.3
	I0805 13:04:05.312450  450884 api_server.go:131] duration metric: took 6.595933ms to wait for apiserver health ...
	I0805 13:04:05.312460  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:04:05.493376  450884 system_pods.go:59] 9 kube-system pods found
	I0805 13:04:05.493417  450884 system_pods.go:61] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.493425  450884 system_pods.go:61] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.493432  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.493438  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.493444  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.493450  450884 system_pods.go:61] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.493456  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.493465  450884 system_pods.go:61] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.493475  450884 system_pods.go:61] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.493488  450884 system_pods.go:74] duration metric: took 181.019102ms to wait for pod list to return data ...
	I0805 13:04:05.493504  450884 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:05.688283  450884 default_sa.go:45] found service account: "default"
	I0805 13:04:05.688313  450884 default_sa.go:55] duration metric: took 194.799711ms for default service account to be created ...
	I0805 13:04:05.688323  450884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:05.892656  450884 system_pods.go:86] 9 kube-system pods found
	I0805 13:04:05.892688  450884 system_pods.go:89] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.892696  450884 system_pods.go:89] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.892702  450884 system_pods.go:89] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.892709  450884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.892715  450884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.892721  450884 system_pods.go:89] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.892727  450884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.892737  450884 system_pods.go:89] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.892743  450884 system_pods.go:89] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.892755  450884 system_pods.go:126] duration metric: took 204.423562ms to wait for k8s-apps to be running ...
	I0805 13:04:05.892765  450884 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:05.892819  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:05.907542  450884 system_svc.go:56] duration metric: took 14.764349ms WaitForService to wait for kubelet
	I0805 13:04:05.907576  450884 kubeadm.go:582] duration metric: took 3.820116927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:05.907599  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:06.089000  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:06.089025  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:06.089035  450884 node_conditions.go:105] duration metric: took 181.431221ms to run NodePressure ...
	I0805 13:04:06.089047  450884 start.go:241] waiting for startup goroutines ...
	I0805 13:04:06.089054  450884 start.go:246] waiting for cluster config update ...
	I0805 13:04:06.089065  450884 start.go:255] writing updated cluster config ...
	I0805 13:04:06.089373  450884 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:06.140202  450884 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:06.142149  450884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-371585" cluster and "default" namespace by default
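The "Checking apiserver healthz at https://192.168.50.228:8444/healthz" step above is a simple HTTPS poll until the endpoint answers 200/ok. A rough Go equivalent; certificate verification is skipped here only to keep the sketch short, whereas minikube validates against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://192.168.50.228:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }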
	I0805 13:04:02.115811  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:02.116057  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:12.115990  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:12.116208  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:32.116734  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:32.117001  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119196  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:05:12.119475  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119502  451238 kubeadm.go:310] 
	I0805 13:05:12.119564  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:05:12.119622  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:05:12.119634  451238 kubeadm.go:310] 
	I0805 13:05:12.119680  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:05:12.119724  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:05:12.119880  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:05:12.119898  451238 kubeadm.go:310] 
	I0805 13:05:12.120029  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:05:12.120114  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:05:12.120169  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:05:12.120179  451238 kubeadm.go:310] 
	I0805 13:05:12.120321  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:05:12.120445  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:05:12.120455  451238 kubeadm.go:310] 
	I0805 13:05:12.120612  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:05:12.120751  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:05:12.120888  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:05:12.121010  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:05:12.121023  451238 kubeadm.go:310] 
	I0805 13:05:12.121325  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:05:12.121458  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:05:12.121545  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 13:05:12.121714  451238 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
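The repeated [kubelet-check] failures in this output come from kubeadm probing the kubelet's local healthz endpoint until it answers or the wait expires. A minimal stand-in for that probe, using the same URL as in the log (this is not kubeadm's code, only a sketch of the check it performs):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        deadline := time.Now().Add(4 * time.Minute) // kubeadm waits "up to 4m0s" for the control plane
        for time.Now().Before(deadline) {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("kubelet is healthy")
                return
            }
            if err != nil {
                fmt.Println("kubelet not ready:", err) // e.g. "connection refused", as seen above
            } else {
                resp.Body.Close()
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for the kubelet")
    }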
	
	I0805 13:05:12.121782  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:05:12.587687  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:05:12.603422  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:05:12.614302  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:05:12.614330  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:05:12.614391  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:05:12.625131  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:05:12.625199  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:05:12.635606  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:05:12.644896  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:05:12.644953  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:05:12.655178  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.664668  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:05:12.664753  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.675174  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:05:12.684765  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:05:12.684834  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
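The sequence above checks each leftover kubeconfig for the expected control-plane endpoint and, since none of the files exist, removes them so the retried kubeadm init starts from a clean state. A condensed sketch of that check-then-remove pattern, as a local rendition of what the ssh_runner commands do on the node:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q missing or stale, removing\n", f)
                _ = os.Remove(f) // ignore "no such file", like `rm -f`
            }
        }
    }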
	I0805 13:05:12.694762  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:05:12.930906  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:07:09.256859  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:07:09.257016  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 13:07:09.258511  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:07:09.258579  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:07:09.258710  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:07:09.258881  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:07:09.259022  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:07:09.259125  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:07:09.260912  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:07:09.261023  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:07:09.261123  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:07:09.261232  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:07:09.261319  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:07:09.261411  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:07:09.261507  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:07:09.261601  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:07:09.261690  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:07:09.261801  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:07:09.261946  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:07:09.262015  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:07:09.262119  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:07:09.262198  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:07:09.262273  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:07:09.262369  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:07:09.262464  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:07:09.262615  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:07:09.262731  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:07:09.262770  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:07:09.262831  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:07:09.264428  451238 out.go:204]   - Booting up control plane ...
	I0805 13:07:09.264537  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:07:09.264663  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:07:09.264774  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:07:09.264896  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:07:09.265144  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:07:09.265224  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:07:09.265318  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265554  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265630  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265783  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265886  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266143  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266221  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266387  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266472  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266656  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266673  451238 kubeadm.go:310] 
	I0805 13:07:09.266707  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:07:09.266738  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:07:09.266743  451238 kubeadm.go:310] 
	I0805 13:07:09.266788  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:07:09.266819  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:07:09.266924  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:07:09.266932  451238 kubeadm.go:310] 
	I0805 13:07:09.267050  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:07:09.267137  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:07:09.267192  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:07:09.267201  451238 kubeadm.go:310] 
	I0805 13:07:09.267316  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:07:09.267435  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:07:09.267445  451238 kubeadm.go:310] 
	I0805 13:07:09.267570  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:07:09.267683  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:07:09.267802  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:07:09.267898  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:07:09.267986  451238 kubeadm.go:310] 
	I0805 13:07:09.268003  451238 kubeadm.go:394] duration metric: took 7m57.870990174s to StartCluster
	I0805 13:07:09.268066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:07:09.268158  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:07:09.311436  451238 cri.go:89] found id: ""
	I0805 13:07:09.311471  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.311497  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:07:09.311509  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:07:09.311573  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:07:09.347748  451238 cri.go:89] found id: ""
	I0805 13:07:09.347776  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.347784  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:07:09.347797  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:07:09.347860  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:07:09.385418  451238 cri.go:89] found id: ""
	I0805 13:07:09.385445  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.385453  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:07:09.385460  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:07:09.385517  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:07:09.427209  451238 cri.go:89] found id: ""
	I0805 13:07:09.427255  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.427268  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:07:09.427276  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:07:09.427360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:07:09.461763  451238 cri.go:89] found id: ""
	I0805 13:07:09.461787  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.461795  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:07:09.461801  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:07:09.461854  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:07:09.498655  451238 cri.go:89] found id: ""
	I0805 13:07:09.498692  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.498705  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:07:09.498713  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:07:09.498782  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:07:09.534100  451238 cri.go:89] found id: ""
	I0805 13:07:09.534134  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.534143  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:07:09.534149  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:07:09.534207  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:07:09.570089  451238 cri.go:89] found id: ""
	I0805 13:07:09.570125  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.570137  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:07:09.570153  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:07:09.570176  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:07:09.625158  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:07:09.625199  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:07:09.640087  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:07:09.640119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:07:09.719851  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:07:09.719879  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:07:09.719895  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:07:09.832717  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:07:09.832758  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0805 13:07:09.878585  451238 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 13:07:09.878653  451238 out.go:239] * 
	W0805 13:07:09.878739  451238 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.878767  451238 out.go:239] * 
	W0805 13:07:09.879755  451238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 13:07:09.883027  451238 out.go:177] 
	W0805 13:07:09.884197  451238 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.884243  451238 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 13:07:09.884265  451238 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 13:07:09.885783  451238 out.go:177] 
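	The kubelet-check advice and the cgroup-driver suggestion printed above translate into a short manual triage sequence. The following is only a sketch, assuming a shell on the affected node (obtainable with 'minikube ssh' for the failing profile) and CRI-O as the runtime, which is what this job uses; none of these commands were run as part of the test itself:

	    # on the node: confirm the kubelet state and read its recent journal
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet

	    # list any control-plane containers CRI-O managed to start, then read the
	    # logs of the failing one (CONTAINERID is a placeholder, as in the output above)
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	If the kubelet journal points at a cgroup-driver mismatch, the suggested flag can be appended to the original start invocation, e.g. 'minikube start ... --extra-config=kubelet.cgroup-driver=systemd' (keeping whatever other flags the test passed); whether that resolves this particular failure is not verified here.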
	
	
	==> CRI-O <==
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.839996329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863567839974486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8e3841f-d76d-4648-8796-10d0f9522998 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.840363344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ef8c710-1cdd-4279-a77e-46cbf343a097 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.840434894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ef8c710-1cdd-4279-a77e-46cbf343a097 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.840675586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ef8c710-1cdd-4279-a77e-46cbf343a097 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.859850627Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=630d5d02-8f4b-4301-9ef8-9b7490eb6bbd name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.860064869Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cb19adf6-e208-4709-b02f-ae32acc30478,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863018879916031,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-05T13:03:38.262342008Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d858ef906fef541e39e6e4820caaf0079cdcb048f8d5c17f345e3b5cf01cc1e4,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-x4j7b,Uid:55a747e4-f9a7-41f1-b584-470048ba6fcb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863018836347300,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-x4j7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a747e4-f9a7-41f1-b584-470048ba6fcb
,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T13:03:38.528429990Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-pqhwx,Uid:3d7bb193-e93e-49b8-be4b-943f2d7fe59d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863018408894311,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T13:03:38.089096179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-npbmj,Uid:9eea9e0a-697b-42c9-
857c-a3556c658fde,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863018372791294,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eea9e0a-697b-42c9-857c-a3556c658fde,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T13:03:38.053003589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&PodSandboxMetadata{Name:kube-proxy-tpn5s,Uid:f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863018258429204,Labels:map[string]string{controller-revision-hash: 677fdd8cbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T13:03:37.940199381Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-669469,Uid:7aca0bf10be39af6c0200757bde06d77,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722863007253318779,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.223:8443,kubernetes.io/config.hash: 7aca0bf10be39af6c0200757bde06d77,kubernetes.io/config.seen: 2024-08-05T13:03:26.773669098Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:794289f6eaecd9a738b4f706dd2678a
06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-669469,Uid:8078ffb805fb9155d9fb81fa32307361,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863007246858554,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.223:2379,kubernetes.io/config.hash: 8078ffb805fb9155d9fb81fa32307361,kubernetes.io/config.seen: 2024-08-05T13:03:26.773667893Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-669469,Uid:107a9040e215dab2b8aab08673b4f751,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863007240364926,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 107a9040e215dab2b8aab08673b4f751,kubernetes.io/config.seen: 2024-08-05T13:03:26.773666331Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-669469,Uid:57b68c085e364fe312def9dbe225e5aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722863007239961084,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: 57b68c085e364fe312def9dbe225e5aa,kubernetes.io/config.seen: 2024-08-05T13:03:26.773651013Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-669469,Uid:7aca0bf10be39af6c0200757bde06d77,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722862722507905933,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.223:8443,kubernetes.io/config.hash: 7aca0bf10be39af6c0200757bde06d77,kubernetes.io/config.seen: 2024-08-05T12:58:41.997121796Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=630d5d02-8f4b-4301-9ef8-9b7490eb6bbd name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.860630348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9d60600-5341-445e-ae61-228d71c211a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.860686119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9d60600-5341-445e-ae61-228d71c211a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.860962186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9d60600-5341-445e-ae61-228d71c211a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.880574939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4cd85b64-565a-49e8-82d1-6cf7add15684 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.880650372Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4cd85b64-565a-49e8-82d1-6cf7add15684 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.881435005Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a50a8aa-d2ce-4de5-b230-dd1a6c79b536 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.881919095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863567881898298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a50a8aa-d2ce-4de5-b230-dd1a6c79b536 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.882355326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=703f10ab-b2d1-4e49-b09b-4a087ea08e6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.882410000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=703f10ab-b2d1-4e49-b09b-4a087ea08e6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.882946302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=703f10ab-b2d1-4e49-b09b-4a087ea08e6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.918243005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92633eba-5bdb-4499-a7b3-210a3d9b5c15 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.918400448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92633eba-5bdb-4499-a7b3-210a3d9b5c15 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.920106718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a15667ed-f587-4a88-b134-d92553a43e23 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.920506373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863567920481354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a15667ed-f587-4a88-b134-d92553a43e23 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.920990426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a765cc4-b2e9-4db9-bd16-14f0e156f50c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.921043863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a765cc4-b2e9-4db9-bd16-14f0e156f50c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.921528865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a765cc4-b2e9-4db9-bd16-14f0e156f50c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.944188965Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=48937f4a-9c05-42cf-8115-80ade10c61a4 name=/runtime.v1.RuntimeService/Status
	Aug 05 13:12:47 no-preload-669469 crio[701]: time="2024-08-05 13:12:47.944254086Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=48937f4a-9c05-42cf-8115-80ade10c61a4 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	720f5cc7faa80       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   146e18ec96e30       storage-provisioner
	3ea0286156b03       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6dc0c99effd8a       coredns-6f6b679f8f-pqhwx
	63d3e5aad7edc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   c967731df8d03       coredns-6f6b679f8f-npbmj
	a8745bae4cc7f       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   9 minutes ago       Running             kube-proxy                0                   f9ec1e715194f       kube-proxy-tpn5s
	4a71aa20c85d5       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   9 minutes ago       Running             kube-scheduler            2                   eafbded883a1b       kube-scheduler-no-preload-669469
	5b0d970998865       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   9 minutes ago       Running             kube-controller-manager   2                   14e32cc8dc2a5       kube-controller-manager-no-preload-669469
	6359cb0c85ad0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   794289f6eaecd       etcd-no-preload-669469
	6496630cffd11       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   9 minutes ago       Running             kube-apiserver            2                   6317297b1dcb5       kube-apiserver-no-preload-669469
	3f57b7378426c       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   14 minutes ago      Exited              kube-apiserver            1                   0b7be9f4229ba       kube-apiserver-no-preload-669469
	
	
	==> coredns [3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-669469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-669469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=no-preload-669469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 13:03:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-669469
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 13:12:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 13:08:48 +0000   Mon, 05 Aug 2024 13:03:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 13:08:48 +0000   Mon, 05 Aug 2024 13:03:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 13:08:48 +0000   Mon, 05 Aug 2024 13:03:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 13:08:48 +0000   Mon, 05 Aug 2024 13:03:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.223
	  Hostname:    no-preload-669469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6cf68b47cf0432ea69b9d25e8c7dfb7
	  System UUID:                e6cf68b4-7cf0-432e-a69b-9d25e8c7dfb7
	  Boot ID:                    c6760a17-44d7-4269-8a25-de73df8e3f0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-npbmj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-6f6b679f8f-pqhwx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-no-preload-669469                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-669469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-no-preload-669469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-tpn5s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-669469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-x4j7b              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node no-preload-669469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node no-preload-669469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node no-preload-669469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node no-preload-669469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node no-preload-669469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node no-preload-669469 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s                  node-controller  Node no-preload-669469 event: Registered Node no-preload-669469 in Controller
	
	
	==> dmesg <==
	[  +0.040658] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.767330] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.466493] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.441223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.665582] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.056619] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053528] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.214168] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.126035] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.591017] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[ +16.361707] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.059565] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.955777] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +5.717080] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.698288] kauditd_printk_skb: 52 callbacks suppressed
	[Aug 5 12:59] kauditd_printk_skb: 30 callbacks suppressed
	[Aug 5 13:03] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.399707] systemd-fstab-generator[3005]: Ignoring "noauto" option for root device
	[  +4.454018] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.610788] systemd-fstab-generator[3327]: Ignoring "noauto" option for root device
	[  +4.891320] systemd-fstab-generator[3440]: Ignoring "noauto" option for root device
	[  +0.108570] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.099091] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696] <==
	{"level":"info","ts":"2024-08-05T13:03:27.796016Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5072550c343bb357","initial-advertise-peer-urls":["https://192.168.72.223:2380"],"listen-peer-urls":["https://192.168.72.223:2380"],"advertise-client-urls":["https://192.168.72.223:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.223:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T13:03:27.796132Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T13:03:27.796201Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.223:2380"}
	{"level":"info","ts":"2024-08-05T13:03:27.802033Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.223:2380"}
	{"level":"info","ts":"2024-08-05T13:03:27.802255Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d0d4b5aa9c0518f1","local-member-id":"5072550c343bb357","added-peer-id":"5072550c343bb357","added-peer-peer-urls":["https://192.168.72.223:2380"]}
	{"level":"info","ts":"2024-08-05T13:03:28.661219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:28.661445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:28.661621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgPreVoteResp from 5072550c343bb357 at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:28.661797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.661837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgVoteResp from 5072550c343bb357 at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.661946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.661982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5072550c343bb357 elected leader 5072550c343bb357 at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.663612Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.664099Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5072550c343bb357","local-member-attributes":"{Name:no-preload-669469 ClientURLs:[https://192.168.72.223:2379]}","request-path":"/0/members/5072550c343bb357/attributes","cluster-id":"d0d4b5aa9c0518f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T13:03:28.664170Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T13:03:28.664948Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d0d4b5aa9c0518f1","local-member-id":"5072550c343bb357","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.665079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.665143Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.665183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T13:03:28.666463Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T13:03:28.667371Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:28.667430Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:28.669042Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T13:03:28.666483Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T13:03:28.676819Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.223:2379"}
	
	
	==> kernel <==
	 13:12:48 up 14 min,  0 users,  load average: 0.29, 0.24, 0.15
	Linux no-preload-669469 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15] <==
	W0805 13:03:22.681378       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.702206       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.753912       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.820521       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.820533       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.850968       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.884098       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.884610       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.984082       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.008146       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.014038       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.029605       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.043200       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.086274       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.094194       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.096897       1 logging.go:55] [core] [Channel #15 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.121631       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.243210       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.289026       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.385256       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.433107       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.496926       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.691076       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.921147       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.924885       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0805 13:08:31.179689       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:08:31.179757       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0805 13:08:31.180753       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0805 13:08:31.180809       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:09:31.180927       1 handler_proxy.go:99] no RequestInfo found in the context
	W0805 13:09:31.181364       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:09:31.181426       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0805 13:09:31.181533       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0805 13:09:31.182675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0805 13:09:31.182844       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:11:31.182984       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:11:31.183374       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0805 13:11:31.183467       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:11:31.183571       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0805 13:11:31.184488       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0805 13:11:31.184797       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1] <==
	E0805 13:07:37.200690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:07:37.653396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:08:07.208402       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:08:07.662223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:08:37.216370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:08:37.670569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:08:48.539980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-669469"
	E0805 13:09:07.222473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:09:07.679543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:09:37.232440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:09:37.687181       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:09:40.787943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="306.524µs"
	I0805 13:09:51.776261       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="57.993µs"
	E0805 13:10:07.238315       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:10:07.694915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:10:37.244466       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:10:37.702796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:11:07.250943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:11:07.711494       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:11:37.262800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:11:37.719347       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:12:07.270540       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:12:07.729103       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:12:37.277873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:12:37.737640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0805 13:03:39.560782       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0805 13:03:39.582126       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.223"]
	E0805 13:03:39.582472       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0805 13:03:39.644601       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0805 13:03:39.644667       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 13:03:39.644781       1 server_linux.go:169] "Using iptables Proxier"
	I0805 13:03:39.650044       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0805 13:03:39.650429       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0805 13:03:39.650463       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 13:03:39.652241       1 config.go:197] "Starting service config controller"
	I0805 13:03:39.652309       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 13:03:39.652344       1 config.go:104] "Starting endpoint slice config controller"
	I0805 13:03:39.652361       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 13:03:39.654472       1 config.go:326] "Starting node config controller"
	I0805 13:03:39.654539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 13:03:39.752537       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 13:03:39.752575       1 shared_informer.go:320] Caches are synced for service config
	I0805 13:03:39.755422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270] <==
	W0805 13:03:30.210940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 13:03:30.211214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:30.211084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 13:03:30.211311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:30.211124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 13:03:30.211377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:30.211182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:30.211624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.015066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:31.015241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.021945       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 13:03:31.022050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.206387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 13:03:31.206488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.317910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 13:03:31.318005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.349963       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 13:03:31.350090       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0805 13:03:31.383130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 13:03:31.383547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.403608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:31.404566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.416504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 13:03:31.416633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0805 13:03:33.801686       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 13:11:42 no-preload-669469 kubelet[3334]: E0805 13:11:42.898588    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863502898104278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:11:42 no-preload-669469 kubelet[3334]: E0805 13:11:42.898613    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863502898104278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:11:44 no-preload-669469 kubelet[3334]: E0805 13:11:44.764132    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:11:52 no-preload-669469 kubelet[3334]: E0805 13:11:52.900877    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863512900339763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:11:52 no-preload-669469 kubelet[3334]: E0805 13:11:52.900973    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863512900339763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:11:55 no-preload-669469 kubelet[3334]: E0805 13:11:55.762780    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:12:02 no-preload-669469 kubelet[3334]: E0805 13:12:02.902622    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863522902269446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:02 no-preload-669469 kubelet[3334]: E0805 13:12:02.903168    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863522902269446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:08 no-preload-669469 kubelet[3334]: E0805 13:12:08.763208    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:12:12 no-preload-669469 kubelet[3334]: E0805 13:12:12.905310    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863532904995086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:12 no-preload-669469 kubelet[3334]: E0805 13:12:12.905338    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863532904995086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:20 no-preload-669469 kubelet[3334]: E0805 13:12:20.762899    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:12:22 no-preload-669469 kubelet[3334]: E0805 13:12:22.906982    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863542906385890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:22 no-preload-669469 kubelet[3334]: E0805 13:12:22.907258    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863542906385890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:32 no-preload-669469 kubelet[3334]: E0805 13:12:32.805464    3334 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:12:32 no-preload-669469 kubelet[3334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:12:32 no-preload-669469 kubelet[3334]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:12:32 no-preload-669469 kubelet[3334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:12:32 no-preload-669469 kubelet[3334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:12:32 no-preload-669469 kubelet[3334]: E0805 13:12:32.909473    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863552909183804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:32 no-preload-669469 kubelet[3334]: E0805 13:12:32.909514    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863552909183804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:33 no-preload-669469 kubelet[3334]: E0805 13:12:33.762946    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:12:42 no-preload-669469 kubelet[3334]: E0805 13:12:42.912580    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863562911806410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:42 no-preload-669469 kubelet[3334]: E0805 13:12:42.913003    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863562911806410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:12:44 no-preload-669469 kubelet[3334]: E0805 13:12:44.762206    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	
	
	==> storage-provisioner [720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1] <==
	I0805 13:03:39.564907       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 13:03:39.581832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 13:03:39.581942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 13:03:39.591421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 13:03:39.591695       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-669469_a33707dc-914f-4c2f-9543-ab961615e6e7!
	I0805 13:03:39.594182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ae93881-5737-4b1f-8fa9-1574a3d54891", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-669469_a33707dc-914f-4c2f-9543-ab961615e6e7 became leader
	I0805 13:03:39.692390       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-669469_a33707dc-914f-4c2f-9543-ab961615e6e7!
	

                                                
                                                
-- /stdout --
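The kubelet section above is dominated by two repeating messages, neither of which is the interesting part of this failure: the ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 is expected, since the suite enables the metrics-server addon with --registries=MetricsServer=fake.domain (the command is visible in the Audit table of the post-mortem logs further down), and the eviction_manager "missing image stats" errors look like noise from the kubelet's image-filesystem stats query against CRI-O rather than the cause of the failed wait. A quick way to confirm the registry override took effect (a sketch, not part of the suite; it assumes the addon's usual metrics-server Deployment in kube-system) is roughly:

    kubectl --context no-preload-669469 -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'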
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-669469 -n no-preload-669469
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-669469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-x4j7b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-669469 describe pod metrics-server-6867b74b74-x4j7b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-669469 describe pod metrics-server-6867b74b74-x4j7b: exit status 1 (63.021759ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-x4j7b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-669469 describe pod metrics-server-6867b74b74-x4j7b: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.35s)
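A side note on the post-mortem above: the pod name captured by the non-running-pod listing (metrics-server-6867b74b74-x4j7b) no longer existed by the time kubectl describe ran, hence the NotFound. Describing by label rather than by generated pod name is more robust to that kind of churn; a hedged equivalent (the k8s-app=metrics-server label is an assumption, it does not appear in this report) would be:

    kubectl --context no-preload-669469 -n kube-system describe pod -l k8s-app=metrics-server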

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-321139 -n embed-certs-321139
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-05 13:13:01.929257762 +0000 UTC m=+6368.886581448
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
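start_stop_delete_test.go:275 reports that no pod labelled k8s-app=kubernetes-dashboard ever appeared in the kubernetes-dashboard namespace within 9m0s. A rough manual equivalent of that wait, using only the names printed by the test (the kubectl invocations themselves are a sketch, not part of the suite), would be:

    kubectl --context embed-certs-321139 -n kubernetes-dashboard get deploy,pods -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-321139 -n kubernetes-dashboard wait --for=condition=Ready pod \
      -l k8s-app=kubernetes-dashboard --timeout=9m0s

Note that kubectl wait errors out immediately when nothing matches the selector, so the get is the closer check when the Deployment was never created; the Audit table in the logs below records the addons enable dashboard -p embed-certs-321139 call at 12:53 UTC with no End Time, which fits that picture.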
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-321139 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-321139 logs -n 25: (2.132365428s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-119870 sudo cat                              | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo find                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo crio                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-119870                                       | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:55:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:55:11.960471  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960479  451238 out.go:304] Setting ErrFile to fd 2...
	I0805 12:55:11.960484  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960646  451238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:55:11.961145  451238 out.go:298] Setting JSON to false
	I0805 12:55:11.962063  451238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9459,"bootTime":1722853053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:55:11.962121  451238 start.go:139] virtualization: kvm guest
	I0805 12:55:11.964372  451238 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:55:11.965770  451238 notify.go:220] Checking for updates...
	I0805 12:55:11.965787  451238 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:55:11.967106  451238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:55:11.968790  451238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:55:11.970181  451238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:55:11.971500  451238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:55:11.973243  451238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:55:11.974825  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:55:11.975239  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.975319  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:11.990296  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0805 12:55:11.990704  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:11.991235  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:11.991259  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:11.991575  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:11.991765  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:11.993484  451238 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 12:55:11.994687  451238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:55:11.994952  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.994984  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:12.009528  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0805 12:55:12.009879  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:12.010353  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:12.010375  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:12.010670  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:12.010857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:12.044634  451238 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:55:12.045859  451238 start.go:297] selected driver: kvm2
	I0805 12:55:12.045876  451238 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.045987  451238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:55:12.046662  451238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.046731  451238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:55:12.061918  451238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:55:12.062400  451238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:55:12.062484  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:55:12.062502  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:55:12.062572  451238 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.062722  451238 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.064478  451238 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:55:10.820047  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:13.892041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:12.065640  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:55:12.065680  451238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:55:12.065701  451238 cache.go:56] Caching tarball of preloaded images
	I0805 12:55:12.065786  451238 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:55:12.065797  451238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:55:12.065897  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:55:12.066073  451238 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:55:19.971977  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:23.044092  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:29.124041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:32.196124  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:38.276045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:41.348117  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:47.428042  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:50.500022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:56.580074  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:59.652091  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:05.732072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:08.804128  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:14.884085  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:17.956073  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:24.036067  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:27.108059  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:33.188012  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:36.260134  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:42.340036  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:45.412038  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:51.492022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:54.564068  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:00.644018  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:03.716112  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:09.796041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:12.868080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:18.948054  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:22.020023  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:28.100099  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:31.172076  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:37.251997  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:40.324080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:46.404055  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:49.476072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:55.556045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:58.627984  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:58:01.632326  450576 start.go:364] duration metric: took 4m17.994768704s to acquireMachinesLock for "no-preload-669469"
	I0805 12:58:01.632391  450576 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:01.632403  450576 fix.go:54] fixHost starting: 
	I0805 12:58:01.632845  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:01.632880  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:01.648358  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0805 12:58:01.648860  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:01.649387  450576 main.go:141] libmachine: Using API Version  1
	I0805 12:58:01.649410  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:01.649779  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:01.649963  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:01.650176  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 12:58:01.651681  450576 fix.go:112] recreateIfNeeded on no-preload-669469: state=Stopped err=<nil>
	I0805 12:58:01.651715  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	W0805 12:58:01.651903  450576 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:01.653860  450576 out.go:177] * Restarting existing kvm2 VM for "no-preload-669469" ...
	I0805 12:58:01.655338  450576 main.go:141] libmachine: (no-preload-669469) Calling .Start
	I0805 12:58:01.655475  450576 main.go:141] libmachine: (no-preload-669469) Ensuring networks are active...
	I0805 12:58:01.656224  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network default is active
	I0805 12:58:01.656565  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network mk-no-preload-669469 is active
	I0805 12:58:01.656898  450576 main.go:141] libmachine: (no-preload-669469) Getting domain xml...
	I0805 12:58:01.657537  450576 main.go:141] libmachine: (no-preload-669469) Creating domain...
	I0805 12:58:02.879809  450576 main.go:141] libmachine: (no-preload-669469) Waiting to get IP...
	I0805 12:58:02.880800  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:02.881194  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:02.881270  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:02.881175  451829 retry.go:31] will retry after 303.380177ms: waiting for machine to come up
	I0805 12:58:03.185834  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.186259  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.186288  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.186214  451829 retry.go:31] will retry after 263.494141ms: waiting for machine to come up
	I0805 12:58:03.451923  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.452263  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.452340  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.452217  451829 retry.go:31] will retry after 310.615163ms: waiting for machine to come up
	I0805 12:58:01.629832  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:01.629873  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630250  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:58:01.630295  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630511  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:58:01.632158  450393 machine.go:97] duration metric: took 4m37.422562602s to provisionDockerMachine
	I0805 12:58:01.632208  450393 fix.go:56] duration metric: took 4m37.444588707s for fixHost
	I0805 12:58:01.632226  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 4m37.44461751s
	W0805 12:58:01.632250  450393 start.go:714] error starting host: provision: host is not running
	W0805 12:58:01.632431  450393 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0805 12:58:01.632445  450393 start.go:729] Will try again in 5 seconds ...
	I0805 12:58:03.764803  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.765280  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.765305  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.765243  451829 retry.go:31] will retry after 570.955722ms: waiting for machine to come up
	I0805 12:58:04.338423  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.338863  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.338893  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.338811  451829 retry.go:31] will retry after 485.490715ms: waiting for machine to come up
	I0805 12:58:04.825511  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.825882  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.825911  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.825823  451829 retry.go:31] will retry after 671.109731ms: waiting for machine to come up
	I0805 12:58:05.498113  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:05.498529  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:05.498557  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:05.498467  451829 retry.go:31] will retry after 997.668856ms: waiting for machine to come up
	I0805 12:58:06.497843  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:06.498144  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:06.498161  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:06.498120  451829 retry.go:31] will retry after 996.614411ms: waiting for machine to come up
	I0805 12:58:07.496801  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:07.497298  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:07.497334  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:07.497249  451829 retry.go:31] will retry after 1.384682595s: waiting for machine to come up
	I0805 12:58:06.634410  450393 start.go:360] acquireMachinesLock for embed-certs-321139: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:58:08.883309  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:08.883701  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:08.883732  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:08.883642  451829 retry.go:31] will retry after 2.017073843s: waiting for machine to come up
	I0805 12:58:10.903852  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:10.904279  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:10.904310  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:10.904233  451829 retry.go:31] will retry after 2.485880433s: waiting for machine to come up
	I0805 12:58:13.392693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:13.393169  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:13.393199  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:13.393116  451829 retry.go:31] will retry after 2.986076236s: waiting for machine to come up
	I0805 12:58:16.380921  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:16.381475  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:16.381508  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:16.381432  451829 retry.go:31] will retry after 4.291617536s: waiting for machine to come up
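The "will retry after ... waiting for machine to come up" lines above come from libmachine polling the libvirt DHCP leases with a growing, jittered delay until the guest reports an IP address. Below is a minimal Go sketch of that retry pattern; the function and callback names are illustrative assumptions, not minikube's actual retry.go API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a growing, jittered delay until it returns an
// address or the overall timeout elapses, mirroring the retry lines above.
// Hypothetical helper for illustration only.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		// Add +/-50% jitter so concurrent waiters do not poll in lockstep.
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.72.223", nil
	}, time.Minute)
	fmt.Println(ip, err)
}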
	I0805 12:58:21.948770  450884 start.go:364] duration metric: took 4m4.773878111s to acquireMachinesLock for "default-k8s-diff-port-371585"
	I0805 12:58:21.948843  450884 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:21.948851  450884 fix.go:54] fixHost starting: 
	I0805 12:58:21.949291  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:21.949337  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:21.966933  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0805 12:58:21.967356  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:21.967874  450884 main.go:141] libmachine: Using API Version  1
	I0805 12:58:21.967899  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:21.968326  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:21.968638  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:21.968874  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 12:58:21.970608  450884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371585: state=Stopped err=<nil>
	I0805 12:58:21.970631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	W0805 12:58:21.970789  450884 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:21.973235  450884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-371585" ...
	I0805 12:58:21.974564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Start
	I0805 12:58:21.974751  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring networks are active...
	I0805 12:58:21.975581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network default is active
	I0805 12:58:21.976001  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network mk-default-k8s-diff-port-371585 is active
	I0805 12:58:21.976376  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Getting domain xml...
	I0805 12:58:21.977078  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Creating domain...
	I0805 12:58:20.678231  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678743  450576 main.go:141] libmachine: (no-preload-669469) Found IP for machine: 192.168.72.223
	I0805 12:58:20.678771  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has current primary IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678786  450576 main.go:141] libmachine: (no-preload-669469) Reserving static IP address...
	I0805 12:58:20.679230  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.679266  450576 main.go:141] libmachine: (no-preload-669469) Reserved static IP address: 192.168.72.223
	I0805 12:58:20.679288  450576 main.go:141] libmachine: (no-preload-669469) DBG | skip adding static IP to network mk-no-preload-669469 - found existing host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"}
	I0805 12:58:20.679302  450576 main.go:141] libmachine: (no-preload-669469) DBG | Getting to WaitForSSH function...
	I0805 12:58:20.679317  450576 main.go:141] libmachine: (no-preload-669469) Waiting for SSH to be available...
	I0805 12:58:20.681864  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682263  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.682297  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682447  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH client type: external
	I0805 12:58:20.682484  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa (-rw-------)
	I0805 12:58:20.682539  450576 main.go:141] libmachine: (no-preload-669469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:20.682557  450576 main.go:141] libmachine: (no-preload-669469) DBG | About to run SSH command:
	I0805 12:58:20.682568  450576 main.go:141] libmachine: (no-preload-669469) DBG | exit 0
	I0805 12:58:20.807791  450576 main.go:141] libmachine: (no-preload-669469) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:20.808168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetConfigRaw
	I0805 12:58:20.808767  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:20.811170  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811486  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.811517  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811738  450576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/config.json ...
	I0805 12:58:20.811957  450576 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:20.811976  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:20.812203  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.814305  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814656  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.814693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814823  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.814996  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815156  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815329  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.815503  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.815871  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.815887  450576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:20.920311  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:20.920344  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920642  450576 buildroot.go:166] provisioning hostname "no-preload-669469"
	I0805 12:58:20.920695  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920951  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.924029  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924583  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.924611  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924770  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.925001  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925190  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925334  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.925514  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.925755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.925774  450576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-669469 && echo "no-preload-669469" | sudo tee /etc/hostname
	I0805 12:58:21.046579  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-669469
	
	I0805 12:58:21.046614  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.049322  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049657  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.049687  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049851  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.050049  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050239  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050412  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.050588  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.050755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.050771  450576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-669469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-669469/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-669469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:21.165100  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
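The SSH snippet above keeps the guest's /etc/hostname and /etc/hosts in sync with the machine name: it rewrites an existing 127.0.1.1 entry or appends one if none exists. The following Go sketch expresses the same /etc/hosts fix-up over an in-memory string; the helper name and sample input are assumptions made for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry makes sure the hosts content maps 127.0.1.1 to hostname,
// rewriting an existing 127.0.1.1 line or appending one if absent.
// Illustrative sketch only.
func ensureHostsEntry(hosts, hostname string) string {
	if matched, _ := regexp.MatchString(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`, hosts); matched {
		return hosts // hostname already present on some line
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, entry)
	}
	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(before, "no-preload-669469"))
}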
	I0805 12:58:21.165134  450576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:21.165170  450576 buildroot.go:174] setting up certificates
	I0805 12:58:21.165180  450576 provision.go:84] configureAuth start
	I0805 12:58:21.165191  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:21.165477  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.168018  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168399  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.168443  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168703  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.171168  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171536  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.171565  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171638  450576 provision.go:143] copyHostCerts
	I0805 12:58:21.171713  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:21.171724  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:21.171807  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:21.171920  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:21.171930  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:21.171955  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:21.172010  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:21.172016  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:21.172037  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:21.172095  450576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.no-preload-669469 san=[127.0.0.1 192.168.72.223 localhost minikube no-preload-669469]
	I0805 12:58:21.287395  450576 provision.go:177] copyRemoteCerts
	I0805 12:58:21.287463  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:21.287505  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.290416  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290765  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.290796  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290962  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.291169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.291323  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.291460  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.373992  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:58:21.398249  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:21.422950  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:58:21.446469  450576 provision.go:87] duration metric: took 281.275299ms to configureAuth
	I0805 12:58:21.446500  450576 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:21.446688  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:58:21.446813  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.449833  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450219  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.450235  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450526  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.450814  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.450993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.451168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.451342  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.451515  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.451532  450576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:21.714813  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:21.714842  450576 machine.go:97] duration metric: took 902.872257ms to provisionDockerMachine
	I0805 12:58:21.714858  450576 start.go:293] postStartSetup for "no-preload-669469" (driver="kvm2")
	I0805 12:58:21.714889  450576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:21.714940  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.715304  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:21.715333  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.717989  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718405  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.718427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718597  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.718832  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.718993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.719152  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.802634  450576 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:21.806957  450576 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:21.806985  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:21.807079  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:21.807186  450576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:21.807293  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:21.816690  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:21.839848  450576 start.go:296] duration metric: took 124.973515ms for postStartSetup
	I0805 12:58:21.839903  450576 fix.go:56] duration metric: took 20.207499572s for fixHost
	I0805 12:58:21.839934  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.842548  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.842869  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.842893  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.843090  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.843310  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843502  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843640  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.843815  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.844015  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.844029  450576 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:21.948584  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862701.921979093
	
	I0805 12:58:21.948613  450576 fix.go:216] guest clock: 1722862701.921979093
	I0805 12:58:21.948623  450576 fix.go:229] Guest: 2024-08-05 12:58:21.921979093 +0000 UTC Remote: 2024-08-05 12:58:21.83991063 +0000 UTC m=+278.340267839 (delta=82.068463ms)
	I0805 12:58:21.948671  450576 fix.go:200] guest clock delta is within tolerance: 82.068463ms
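The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and treat the machine as healthy when the delta stays inside a tolerance. A small Go sketch of that comparison follows; the 2s tolerance and helper name are assumptions for illustration.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the given host reference time. Hypothetical helper.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	delta, err := clockDelta("1722862701.921979093", time.Unix(1722862701, 839910630))
	if err != nil {
		panic(err)
	}
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}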
	I0805 12:58:21.948680  450576 start.go:83] releasing machines lock for "no-preload-669469", held for 20.316310092s
	I0805 12:58:21.948713  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.948990  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.951624  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952086  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.952136  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952256  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952797  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952984  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.953065  450576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:21.953113  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.953227  450576 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:21.953255  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.955837  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956081  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956200  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956227  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956370  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956504  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956528  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.956568  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956670  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.956760  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956857  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.956906  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.957058  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.957205  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:22.058847  450576 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:22.065110  450576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:22.211415  450576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:22.219405  450576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:22.219492  450576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:22.240631  450576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:22.240659  450576 start.go:495] detecting cgroup driver to use...
	I0805 12:58:22.240764  450576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:22.258777  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:22.273312  450576 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:22.273400  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:22.288455  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:22.305028  450576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:22.428098  450576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:22.586232  450576 docker.go:233] disabling docker service ...
	I0805 12:58:22.586318  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:22.611888  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:22.627393  450576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:22.757335  450576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:22.878168  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:22.896174  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:22.914395  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:23.229202  450576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 12:58:23.229300  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.242180  450576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:23.242262  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.254577  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.265805  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.276522  450576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:23.287288  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.297863  450576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.314322  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
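The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o uses the requested pause image and the cgroupfs cgroup manager. The same two edits are sketched below in Go over an in-memory string; this is an illustrative reimplementation under those assumptions, not minikube's crio configuration code.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the equivalent of the sed edits in the log:
// set pause_image to the requested image and force cgroup_manager to cgroupfs.
// Sketch over an in-memory string, not the real file edit.
func rewriteCrioConf(conf, pauseImage string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.10"))
}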
	I0805 12:58:23.324662  450576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:23.334125  450576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:23.334192  450576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:23.346701  450576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
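When the bridge-nf-call-iptables sysctl is missing, the log shows the tooling falling back to loading br_netfilter and then enabling IPv4 forwarding. A hedged Go sketch of that fallback is below; it shells out to the same commands, needs root to actually succeed, and the helper name is hypothetical.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: verify the
// bridge-nf-call-iptables sysctl exists, load br_netfilter if it does not,
// then enable IPv4 forwarding. Illustrative only.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl is absent until the module is loaded, so try modprobe.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}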
	I0805 12:58:23.356256  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:23.474046  450576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:23.617276  450576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:23.617363  450576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:23.622001  450576 start.go:563] Will wait 60s for crictl version
	I0805 12:58:23.622047  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:23.626041  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:23.670186  450576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:23.670267  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.700616  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.733411  450576 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 12:58:23.254293  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting to get IP...
	I0805 12:58:23.255331  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255802  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255880  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.255773  451963 retry.go:31] will retry after 245.269435ms: waiting for machine to come up
	I0805 12:58:23.502617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503130  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.503068  451963 retry.go:31] will retry after 243.155673ms: waiting for machine to come up
	I0805 12:58:23.747498  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747913  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.747867  451963 retry.go:31] will retry after 459.286566ms: waiting for machine to come up
	I0805 12:58:24.208594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209076  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209127  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.209003  451963 retry.go:31] will retry after 499.069946ms: waiting for machine to come up
	I0805 12:58:24.709128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709554  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709577  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.709512  451963 retry.go:31] will retry after 732.735525ms: waiting for machine to come up
	I0805 12:58:25.443632  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444185  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:25.444125  451963 retry.go:31] will retry after 883.69375ms: waiting for machine to come up
	I0805 12:58:26.329477  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330010  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:26.329947  451963 retry.go:31] will retry after 1.157298734s: waiting for machine to come up
	I0805 12:58:23.734875  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:23.737945  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738460  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:23.738487  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738646  450576 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:23.742894  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:23.756164  450576 kubeadm.go:883] updating cluster {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:23.756435  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.035575  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.352144  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.657175  450576 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 12:58:24.657266  450576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:24.694685  450576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 12:58:24.694720  450576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:58:24.694809  450576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.694831  450576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.694845  450576 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0805 12:58:24.694867  450576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.694835  450576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.694815  450576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.694801  450576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.694917  450576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.696859  450576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.696860  450576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696902  450576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.696904  450576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.696881  450576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0805 12:58:24.864249  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.867334  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.905018  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.920294  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.925405  450576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0805 12:58:24.925440  450576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0805 12:58:24.925456  450576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.925476  450576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.925508  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.925520  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.973191  450576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0805 12:58:24.973240  450576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.973304  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.986685  450576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0805 12:58:24.986706  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.986723  450576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.986772  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.037012  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.037066  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:25.037132  450576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.067311  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.068528  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.073769  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073831  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.073872  450576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073933  450576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.082476  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0805 12:58:25.126044  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0805 12:58:25.126080  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126127  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0805 12:58:25.126144  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126230  450576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:25.149903  450576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0805 12:58:25.149965  450576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.150028  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196288  450576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0805 12:58:25.196336  450576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.196388  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196416  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0805 12:58:25.196510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0805 12:58:25.651632  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.532922  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.406747514s)
	I0805 12:58:27.532959  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0805 12:58:27.532994  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533010  450576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.406755032s)
	I0805 12:58:27.533048  450576 ssh_runner.go:235] Completed: which crictl: (2.383000552s)
	I0805 12:58:27.533050  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0805 12:58:27.533082  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533082  450576 ssh_runner.go:235] Completed: which crictl: (2.336681164s)
	I0805 12:58:27.533095  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:27.533117  450576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.88145852s)
	I0805 12:58:27.533139  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:27.533161  450576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0805 12:58:27.533198  450576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.533234  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:27.488683  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489080  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489108  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:27.489027  451963 retry.go:31] will retry after 997.566168ms: waiting for machine to come up
	I0805 12:58:28.488397  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488846  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488878  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:28.488794  451963 retry.go:31] will retry after 1.327498575s: waiting for machine to come up
	I0805 12:58:29.818339  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818705  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818735  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:29.818660  451963 retry.go:31] will retry after 2.105158858s: waiting for machine to come up
	I0805 12:58:31.925036  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925601  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:31.925492  451963 retry.go:31] will retry after 2.860711737s: waiting for machine to come up
	I0805 12:58:29.629896  450576 ssh_runner.go:235] Completed: which crictl: (2.096633143s)
	I0805 12:58:29.630000  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:29.630084  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.096969259s)
	I0805 12:58:29.630184  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0805 12:58:29.630102  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.09697893s)
	I0805 12:58:29.630255  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0805 12:58:29.630121  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0: (2.096957841s)
	I0805 12:58:29.630282  450576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:29.630286  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630313  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.630322  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630381  450576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.675831  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0805 12:58:29.675914  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 12:58:29.676019  450576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:31.695376  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.06501136s)
	I0805 12:58:31.695429  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0805 12:58:31.695458  450576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:31.695476  450576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.019437866s)
	I0805 12:58:31.695382  450576 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (2.064967299s)
	I0805 12:58:31.695510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0805 12:58:31.695523  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0805 12:58:31.695536  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:34.789126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789644  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789673  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:34.789592  451963 retry.go:31] will retry after 2.763937018s: waiting for machine to come up
	I0805 12:58:33.659147  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.963588438s)
	I0805 12:58:33.659183  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0805 12:58:33.659216  450576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:33.659263  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:37.466579  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.807281649s)
	I0805 12:58:37.466623  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0805 12:58:37.466657  450576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:37.466709  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:38.111584  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 12:58:38.111633  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:38.111678  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:37.554827  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555233  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555263  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:37.555184  451963 retry.go:31] will retry after 3.143735106s: waiting for machine to come up
	I0805 12:58:40.701139  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701615  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Found IP for machine: 192.168.50.228
	I0805 12:58:40.701649  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has current primary IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701660  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserving static IP address...
	I0805 12:58:40.702105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.702126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserved static IP address: 192.168.50.228
	I0805 12:58:40.702146  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | skip adding static IP to network mk-default-k8s-diff-port-371585 - found existing host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"}
	I0805 12:58:40.702156  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for SSH to be available...
	I0805 12:58:40.702198  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Getting to WaitForSSH function...
	I0805 12:58:40.704600  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.704920  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.704950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.705091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH client type: external
	I0805 12:58:40.705129  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa (-rw-------)
	I0805 12:58:40.705179  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:40.705200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | About to run SSH command:
	I0805 12:58:40.705218  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | exit 0
	I0805 12:58:40.836818  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:40.837228  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetConfigRaw
	I0805 12:58:40.837884  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:40.840503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.840843  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.840870  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.841129  450884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/config.json ...
	I0805 12:58:40.841353  450884 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:40.841373  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:40.841587  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.843943  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844308  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.844336  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.844614  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844922  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.845067  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.845322  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.845333  450884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:40.952367  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:40.952410  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952733  450884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-371585"
	I0805 12:58:40.952762  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.955642  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.956077  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.956493  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956651  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956804  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.957027  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.957239  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.957255  450884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-371585 && echo "default-k8s-diff-port-371585" | sudo tee /etc/hostname
	I0805 12:58:41.077775  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-371585
	
	I0805 12:58:41.077808  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.080777  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081230  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.081273  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081406  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.081631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081963  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.082139  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.082315  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.082333  450884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-371585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-371585/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-371585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:41.200835  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:41.200871  450884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:41.200923  450884 buildroot.go:174] setting up certificates
	I0805 12:58:41.200934  450884 provision.go:84] configureAuth start
	I0805 12:58:41.200945  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:41.201284  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:41.204107  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204460  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.204494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.206634  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.206948  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.206977  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.207048  450884 provision.go:143] copyHostCerts
	I0805 12:58:41.207139  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:41.207151  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:41.207215  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:41.207333  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:41.207345  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:41.207372  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:41.207451  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:41.207462  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:41.207502  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:41.207573  450884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-371585 san=[127.0.0.1 192.168.50.228 default-k8s-diff-port-371585 localhost minikube]
	I0805 12:58:41.357243  450884 provision.go:177] copyRemoteCerts
	I0805 12:58:41.357344  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:41.357386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.360309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360697  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.360738  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360933  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.361120  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.361295  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.361474  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.454251  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:41.480595  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0805 12:58:41.506729  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:58:41.533349  450884 provision.go:87] duration metric: took 332.399026ms to configureAuth
	I0805 12:58:41.533402  450884 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:41.533575  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:58:41.533655  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.536469  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.536831  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.536862  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.537006  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.537197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537541  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.537734  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.537946  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.537968  450884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:41.827043  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:41.827078  450884 machine.go:97] duration metric: took 985.710155ms to provisionDockerMachine
	I0805 12:58:41.827095  450884 start.go:293] postStartSetup for "default-k8s-diff-port-371585" (driver="kvm2")
	I0805 12:58:41.827109  450884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:41.827145  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:41.827564  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:41.827605  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.830350  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830724  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.830761  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830853  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.831034  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.831206  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.831329  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.915261  450884 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:41.919719  450884 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:41.919760  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:41.919835  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:41.919930  450884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:41.920062  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:41.929842  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:41.958933  450884 start.go:296] duration metric: took 131.820227ms for postStartSetup
	I0805 12:58:41.958981  450884 fix.go:56] duration metric: took 20.010130311s for fixHost
	I0805 12:58:41.959012  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.962092  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962510  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.962540  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962726  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.962968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963153  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.963479  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.963687  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.963700  450884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:58:42.080993  451238 start.go:364] duration metric: took 3m30.014883629s to acquireMachinesLock for "old-k8s-version-635707"
	I0805 12:58:42.081066  451238 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:42.081076  451238 fix.go:54] fixHost starting: 
	I0805 12:58:42.081569  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:42.081611  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:42.101889  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0805 12:58:42.102366  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:42.102910  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:58:42.102947  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:42.103310  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:42.103552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:58:42.103718  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:58:42.105465  451238 fix.go:112] recreateIfNeeded on old-k8s-version-635707: state=Stopped err=<nil>
	I0805 12:58:42.105504  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	W0805 12:58:42.105674  451238 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:42.107563  451238 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-635707" ...
	I0805 12:58:39.567840  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.456137011s)
	I0805 12:58:39.567879  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0805 12:58:39.567905  450576 cache_images.go:123] Successfully loaded all cached images
	I0805 12:58:39.567911  450576 cache_images.go:92] duration metric: took 14.873174481s to LoadCachedImages
	I0805 12:58:39.567921  450576 kubeadm.go:934] updating node { 192.168.72.223 8443 v1.31.0-rc.0 crio true true} ...
	I0805 12:58:39.568053  450576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-669469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:39.568137  450576 ssh_runner.go:195] Run: crio config
	I0805 12:58:39.616607  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:39.616634  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:39.616660  450576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:39.616683  450576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.223 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-669469 NodeName:no-preload-669469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:39.616822  450576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-669469"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:39.616896  450576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 12:58:39.627827  450576 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:39.627901  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:39.637348  450576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0805 12:58:39.653917  450576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 12:58:39.670196  450576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0805 12:58:39.686922  450576 ssh_runner.go:195] Run: grep 192.168.72.223	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:39.690804  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:39.703146  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:39.834718  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:39.857015  450576 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469 for IP: 192.168.72.223
	I0805 12:58:39.857036  450576 certs.go:194] generating shared ca certs ...
	I0805 12:58:39.857057  450576 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:39.857229  450576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:39.857286  450576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:39.857300  450576 certs.go:256] generating profile certs ...
	I0805 12:58:39.857431  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/client.key
	I0805 12:58:39.857489  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key.dd0884bb
	I0805 12:58:39.857535  450576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key
	I0805 12:58:39.857683  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:39.857723  450576 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:39.857739  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:39.857769  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:39.857834  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:39.857872  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:39.857923  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:39.858695  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:39.895944  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:39.925816  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:39.960150  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:39.993307  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:58:40.027900  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:58:40.053492  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:40.077331  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:40.101010  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:40.123991  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:40.147563  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:40.170414  450576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:40.188256  450576 ssh_runner.go:195] Run: openssl version
	I0805 12:58:40.193955  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:40.204793  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209061  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209115  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.214948  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:40.226193  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:40.237723  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.241960  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.242019  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.247502  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:40.258791  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:40.270176  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274717  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274786  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.280457  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
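The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject-hash of each CA file. A minimal Go sketch of that hash-and-link step, shelling out to openssl exactly as the log does (paths are illustrative only, and writing to /etc/ssl/certs needs root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashAndLink mirrors the "openssl x509 -hash" + "ln -fs" step from the log:
// compute the subject hash of a CA certificate and link it as
// <certsDir>/<hash>.0 so OpenSSL-based clients can find it.
func hashAndLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Example path only; the real run links several CA files this way.
	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```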
	I0805 12:58:40.292091  450576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:40.296842  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:40.303003  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:40.309009  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:40.314951  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:40.320674  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:40.326433  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
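The -checkend 86400 calls above verify that each control-plane certificate stays valid for at least another 24 hours before it is reused. An equivalent check in pure Go, assuming the certificate is a single PEM block (the file path is only an example):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least the given duration, mirroring `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}
```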
	I0805 12:58:40.331848  450576 kubeadm.go:392] StartCluster: {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:40.331938  450576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:40.331975  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.374390  450576 cri.go:89] found id: ""
	I0805 12:58:40.374482  450576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:40.385467  450576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:40.385485  450576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:40.385531  450576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:40.395411  450576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:40.396455  450576 kubeconfig.go:125] found "no-preload-669469" server: "https://192.168.72.223:8443"
	I0805 12:58:40.400090  450576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:40.410942  450576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.223
	I0805 12:58:40.410971  450576 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:40.410985  450576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:40.411032  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.453021  450576 cri.go:89] found id: ""
	I0805 12:58:40.453115  450576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:40.470389  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:40.480421  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:40.480445  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:40.480502  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:58:40.489625  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:40.489672  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:40.499261  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:58:40.508571  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:40.508634  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:40.517811  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.526563  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:40.526620  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.535753  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:58:40.544981  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:40.545040  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
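Because none of the kubeconfig files contain the expected control-plane endpoint, each one is removed so kubeadm can regenerate it in the kubeconfig phase. A rough Go sketch of that check-and-remove loop (file names taken from the log; the endpoint constant is assumed from the grep pattern above):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: drop it so kubeadm
			// rewrites it from the current kubeadm.yaml.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("keeping:", f)
	}
}
```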
	I0805 12:58:40.555237  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:40.565180  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:40.683889  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.632122  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.866665  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.944022  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
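The restart path rebuilds the control plane by running individual kubeadm init phases in sequence rather than a full init. A hedged sketch of that sequence via os/exec, assuming kubeadm is on PATH and the config sits at the path shown in the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them: certs, kubeconfig files,
	// kubelet bootstrap, static control-plane manifests, local etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```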
	I0805 12:58:42.048030  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:42.048127  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:42.548995  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.048336  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.086457  450576 api_server.go:72] duration metric: took 1.038426772s to wait for apiserver process to appear ...
	I0805 12:58:43.086487  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:43.086509  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:43.086982  450576 api_server.go:269] stopped: https://192.168.72.223:8443/healthz: Get "https://192.168.72.223:8443/healthz": dial tcp 192.168.72.223:8443: connect: connection refused
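From here the log alternates between checking for the apiserver process and polling its /healthz endpoint until it returns 200. A minimal sketch of such a poll loop, using the endpoint from the log and skipping TLS verification the way a bootstrap probe typically does (the timeout values are illustrative):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.223:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```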
	I0805 12:58:42.080800  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862722.053648046
	
	I0805 12:58:42.080828  450884 fix.go:216] guest clock: 1722862722.053648046
	I0805 12:58:42.080839  450884 fix.go:229] Guest: 2024-08-05 12:58:42.053648046 +0000 UTC Remote: 2024-08-05 12:58:41.958987261 +0000 UTC m=+264.923354352 (delta=94.660785ms)
	I0805 12:58:42.080867  450884 fix.go:200] guest clock delta is within tolerance: 94.660785ms
	I0805 12:58:42.080876  450884 start.go:83] releasing machines lock for "default-k8s-diff-port-371585", held for 20.132054114s
	I0805 12:58:42.080916  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.081260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:42.084196  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084662  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.084695  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084867  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085589  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085786  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085875  450884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:42.085925  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.086064  450884 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:42.086091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.088693  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089018  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089042  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.089455  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.089729  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.089730  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089785  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089881  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.089970  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.090128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.090286  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.090457  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.193160  450884 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:42.199341  450884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:42.344713  450884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:42.350944  450884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:42.351026  450884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:42.368162  450884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:42.368196  450884 start.go:495] detecting cgroup driver to use...
	I0805 12:58:42.368260  450884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:42.384477  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:42.401847  450884 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:42.401907  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:42.416318  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:42.430994  450884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:42.545944  450884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:42.721877  450884 docker.go:233] disabling docker service ...
	I0805 12:58:42.721961  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:42.743504  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:42.763111  450884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:42.914270  450884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:43.064816  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:43.090748  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:43.115493  450884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:58:43.115565  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.132497  450884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:43.132583  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.146700  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.159880  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.175598  450884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:43.191263  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.207573  450884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.229567  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
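The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager. A rough Go equivalent of the first two substitutions (same file path and values as the log; error handling kept minimal, and root access is assumed):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same substitutions the log performs with sed -i.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```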
	I0805 12:58:43.248604  450884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:43.261272  450884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:43.261350  450884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:43.276740  450884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:43.288473  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:43.436066  450884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:43.593264  450884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:43.593355  450884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:43.599342  450884 start.go:563] Will wait 60s for crictl version
	I0805 12:58:43.599419  450884 ssh_runner.go:195] Run: which crictl
	I0805 12:58:43.603681  450884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:43.651181  450884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:43.651296  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.691418  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.725036  450884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:58:42.109016  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .Start
	I0805 12:58:42.109214  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:58:42.110192  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:58:42.110686  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:58:42.111108  451238 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:58:42.112194  451238 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:58:43.453015  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:58:43.453994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.454435  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.454504  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.454435  452186 retry.go:31] will retry after 270.355403ms: waiting for machine to come up
	I0805 12:58:43.727101  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.727583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.727641  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.727568  452186 retry.go:31] will retry after 313.75466ms: waiting for machine to come up
	I0805 12:58:44.043303  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.043954  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.043981  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.043855  452186 retry.go:31] will retry after 308.608573ms: waiting for machine to come up
	I0805 12:58:44.354830  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.355396  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.355421  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.355305  452186 retry.go:31] will retry after 510.256657ms: waiting for machine to come up
	I0805 12:58:44.866970  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.867534  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.867559  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.867424  452186 retry.go:31] will retry after 668.55006ms: waiting for machine to come up
	I0805 12:58:45.537377  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:45.537959  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:45.537989  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:45.537909  452186 retry.go:31] will retry after 677.549944ms: waiting for machine to come up
	I0805 12:58:46.217077  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:46.217591  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:46.217625  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:46.217483  452186 retry.go:31] will retry after 847.636867ms: waiting for machine to come up
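While the other profiles configure their runtimes, the old-k8s-version machine is still waiting for a DHCP lease, retrying with a growing delay. A generic sketch of that retry-with-backoff pattern (lookupIP is a stand-in, not the real libmachine driver call, and the intervals are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for the driver's DHCP-lease query; it fails until
// the guest has obtained an address.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

func main() {
	backoff := 250 * time.Millisecond
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Jittered, growing delay, similar to the retry.go intervals in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff += backoff / 2
	}
	fmt.Println("timed out waiting for an IP address")
}
```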
	I0805 12:58:43.726277  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:43.729689  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730162  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:43.730195  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730391  450884 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:43.735448  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
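The one-liner above rewrites /etc/hosts so host.minikube.internal points at the gateway address. The same edit expressed in Go (the address is taken from the log; writing /etc/hosts requires root):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Drop any previous host.minikube.internal line, then append the fresh
	// one, mirroring the grep -v / echo pipeline in the log.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```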
	I0805 12:58:43.749640  450884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:43.749808  450884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:58:43.749886  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:43.798507  450884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:58:43.798584  450884 ssh_runner.go:195] Run: which lz4
	I0805 12:58:43.803306  450884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:58:43.809104  450884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:58:43.809144  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:58:45.333758  450884 crio.go:462] duration metric: took 1.530500213s to copy over tarball
	I0805 12:58:45.333831  450884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
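When the guest has no preloaded images, the tarball is copied over and unpacked into /var with lz4 before the runtime starts pulling anything. A sketch of that existence check and extraction via os/exec (paths mirror the log; tar and lz4 must be installed on the guest):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "preload tarball missing, copy it first:", err)
		os.Exit(1)
	}
	// Same flags the log uses: keep xattrs/capabilities and decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}
```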
	I0805 12:58:43.587275  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.303995  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.304038  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.304057  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.308815  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.308849  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.587239  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.595116  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.595151  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.087372  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.094319  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.094363  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.586909  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.592210  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.592252  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.086763  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.095151  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.095182  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.586840  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.593834  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.593870  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.087516  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.093647  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:49.093677  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.587309  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.593592  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 12:58:49.602960  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:58:49.603001  450576 api_server.go:131] duration metric: took 6.516505116s to wait for apiserver health ...
	I0805 12:58:49.603013  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:49.603024  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:49.851135  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:47.067245  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:47.067895  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:47.067930  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:47.067838  452186 retry.go:31] will retry after 1.275228928s: waiting for machine to come up
	I0805 12:58:48.344881  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:48.345295  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:48.345319  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:48.345258  452186 retry.go:31] will retry after 1.826891386s: waiting for machine to come up
	I0805 12:58:50.174583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:50.175111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:50.175138  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:50.175074  452186 retry.go:31] will retry after 1.53756677s: waiting for machine to come up
	I0805 12:58:51.714025  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:51.714529  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:51.714553  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:51.714485  452186 retry.go:31] will retry after 2.762270002s: waiting for machine to come up
	I0805 12:58:47.908896  450884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575029516s)
	I0805 12:58:47.908929  450884 crio.go:469] duration metric: took 2.575138566s to extract the tarball
	I0805 12:58:47.908938  450884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:58:47.964757  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:48.013358  450884 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:58:48.013392  450884 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:58:48.013404  450884 kubeadm.go:934] updating node { 192.168.50.228 8444 v1.30.3 crio true true} ...
	I0805 12:58:48.013533  450884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-371585 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:48.013623  450884 ssh_runner.go:195] Run: crio config
	I0805 12:58:48.062183  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:48.062219  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:48.062238  450884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:48.062274  450884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-371585 NodeName:default-k8s-diff-port-371585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:48.062474  450884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-371585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:48.062552  450884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:58:48.076490  450884 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:48.076583  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:48.090058  450884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0805 12:58:48.110202  450884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:58:48.131420  450884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0805 12:58:48.151774  450884 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:48.156904  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:48.172398  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:48.292999  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:48.310331  450884 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585 for IP: 192.168.50.228
	I0805 12:58:48.310366  450884 certs.go:194] generating shared ca certs ...
	I0805 12:58:48.310389  450884 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:48.310576  450884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:48.310640  450884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:48.310658  450884 certs.go:256] generating profile certs ...
	I0805 12:58:48.310803  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/client.key
	I0805 12:58:48.310881  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key.f7891227
	I0805 12:58:48.310946  450884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key
	I0805 12:58:48.311231  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:48.311317  450884 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:48.311354  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:48.311408  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:48.311447  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:48.311485  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:48.311545  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:48.312365  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:48.363733  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:48.395662  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:48.450822  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:48.495611  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 12:58:48.529393  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:58:48.557543  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:48.584777  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:48.611987  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:48.637500  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:48.664469  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:48.690221  450884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:48.709082  450884 ssh_runner.go:195] Run: openssl version
	I0805 12:58:48.716181  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:48.728455  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733395  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733456  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.739295  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:48.750515  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:48.761506  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.765995  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.766052  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.772121  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:48.783123  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:48.794318  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798795  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798843  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.804878  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:48.816757  450884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:48.821686  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:48.828121  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:48.834386  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:48.840425  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:48.846218  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:48.852035  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:58:48.857997  450884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:48.858131  450884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:48.858179  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.908402  450884 cri.go:89] found id: ""
	I0805 12:58:48.908471  450884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:48.921185  450884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:48.921207  450884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:48.921258  450884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:48.932907  450884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:48.933927  450884 kubeconfig.go:125] found "default-k8s-diff-port-371585" server: "https://192.168.50.228:8444"
	I0805 12:58:48.936058  450884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:48.947233  450884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.228
	I0805 12:58:48.947262  450884 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:48.947273  450884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:48.947313  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.988179  450884 cri.go:89] found id: ""
	I0805 12:58:48.988281  450884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:49.005901  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:49.016576  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:49.016597  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:49.016648  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 12:58:49.029718  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:49.029822  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:49.041670  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 12:58:49.051650  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:49.051724  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:49.061671  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.071671  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:49.071755  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.082022  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 12:58:49.092013  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:49.092103  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:49.105446  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:49.118581  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:49.233260  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.199462  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.418823  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.500350  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.594991  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:50.595109  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.096171  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.596111  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.633309  450884 api_server.go:72] duration metric: took 1.038316986s to wait for apiserver process to appear ...
	I0805 12:58:51.633350  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:51.633377  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:51.634005  450884 api_server.go:269] stopped: https://192.168.50.228:8444/healthz: Get "https://192.168.50.228:8444/healthz": dial tcp 192.168.50.228:8444: connect: connection refused
	I0805 12:58:50.021635  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:50.036338  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:50.060746  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:50.159670  450576 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:50.159724  450576 system_pods.go:61] "coredns-6f6b679f8f-nkv88" [ee7e59fb-2500-4d7a-9537-e38e08fb2445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:50.159737  450576 system_pods.go:61] "etcd-no-preload-669469" [095df0f1-069a-419f-815b-ddbec3a2291f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:50.159762  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [20b45902-b807-457a-93b3-d2b9b76d2598] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:50.159772  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [122a47ed-7f6f-4b2e-980a-45f41b997dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:50.159780  450576 system_pods.go:61] "kube-proxy-cwq69" [78e0333b-a0f4-40a6-a04d-6971bb4d09a8] Running
	I0805 12:58:50.159788  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [88010c2b-b32f-4fe1-952d-262e881b76dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:50.159796  450576 system_pods.go:61] "metrics-server-6867b74b74-p7b2r" [7e4dd805-07c8-4339-bf1a-57a98fd674cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:50.159808  450576 system_pods.go:61] "storage-provisioner" [207c46c5-c3c0-4f0b-b3ea-9b42b9e5f761] Running
	I0805 12:58:50.159817  450576 system_pods.go:74] duration metric: took 99.038765ms to wait for pod list to return data ...
	I0805 12:58:50.159830  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:50.163888  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:50.163923  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:50.163956  450576 node_conditions.go:105] duration metric: took 4.11869ms to run NodePressure ...
	I0805 12:58:50.163980  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.849885  450576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854483  450576 kubeadm.go:739] kubelet initialised
	I0805 12:58:50.854505  450576 kubeadm.go:740] duration metric: took 4.588388ms waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854514  450576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:50.861245  450576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:52.869370  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:52.134427  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.933253  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.933288  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:54.933305  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.970883  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.970928  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:55.134250  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.139762  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.139798  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:55.634499  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.644495  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.644532  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.134123  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.141958  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:56.142002  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.633573  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.640578  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 12:58:56.649624  450884 api_server.go:141] control plane version: v1.30.3
	I0805 12:58:56.649659  450884 api_server.go:131] duration metric: took 5.016299114s to wait for apiserver health ...
	I0805 12:58:56.649671  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:56.649681  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:56.651587  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:54.478201  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:54.478619  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:54.478650  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:54.478579  452186 retry.go:31] will retry after 2.992766963s: waiting for machine to come up
	I0805 12:58:56.652853  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:56.663878  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:56.699765  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:56.715040  450884 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:56.715078  450884 system_pods.go:61] "coredns-7db6d8ff4d-8rzb7" [df42e41d-4544-493f-a09d-678df1fb5258] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:56.715085  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [1ab6cd59-432a-44b8-95f2-948c585d9bbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:56.715092  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [c9173b98-c77e-4ad0-aea5-c894c045e0c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:56.715101  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [283737ec-1afa-4994-9cee-b655a8397a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:56.715105  450884 system_pods.go:61] "kube-proxy-5dr9v" [767ccb8b-2db0-4b59-b3b0-e099185bc725] Running
	I0805 12:58:56.715111  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [fb3cfdea-9370-4842-a5ab-5ac24804f59e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:56.715116  450884 system_pods.go:61] "metrics-server-569cc877fc-dsrqr" [0d4c79e4-aa6c-42f5-840b-91b9d714d078] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:56.715125  450884 system_pods.go:61] "storage-provisioner" [2dba6f50-5cdc-4195-8daf-c19dac38f488] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:58:56.715133  450884 system_pods.go:74] duration metric: took 15.343284ms to wait for pod list to return data ...
	I0805 12:58:56.715144  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:56.720006  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:56.720031  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:56.720042  450884 node_conditions.go:105] duration metric: took 4.893566ms to run NodePressure ...
	I0805 12:58:56.720059  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:56.985822  450884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990461  450884 kubeadm.go:739] kubelet initialised
	I0805 12:58:56.990484  450884 kubeadm.go:740] duration metric: took 4.636814ms waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990493  450884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:56.996266  450884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.001407  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001434  450884 pod_ready.go:81] duration metric: took 5.140963ms for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.001446  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001456  450884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.005437  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005473  450884 pod_ready.go:81] duration metric: took 3.995646ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.005486  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005495  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.009923  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009943  450884 pod_ready.go:81] duration metric: took 4.439871ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.009952  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009958  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:54.869534  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:56.370007  450576 pod_ready.go:92] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:56.370035  450576 pod_ready.go:81] duration metric: took 5.508756413s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:56.370045  450576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376357  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.376386  450576 pod_ready.go:81] duration metric: took 2.006334873s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376396  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.473094  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:57.473555  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:57.473587  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:57.473495  452186 retry.go:31] will retry after 4.27138033s: waiting for machine to come up
	I0805 12:59:01.750111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750558  451238 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:59:01.750586  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750593  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:59:01.751003  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:59:01.751061  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.751081  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:59:01.751112  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | skip adding static IP to network mk-old-k8s-version-635707 - found existing host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"}
	I0805 12:59:01.751130  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:59:01.753240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753634  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.753672  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753810  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:59:01.753854  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:59:01.753900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:01.753919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:59:01.753933  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:59:01.875919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:01.876298  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:59:01.877028  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:01.879644  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880120  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.880164  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880508  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:59:01.880778  451238 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:01.880805  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:01.881039  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.882998  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883362  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.883389  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883553  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.883755  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.883900  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.884012  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.884248  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.884496  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.884511  451238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:57.103049  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103095  450884 pod_ready.go:81] duration metric: took 93.113727ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.103109  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103116  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503531  450884 pod_ready.go:92] pod "kube-proxy-5dr9v" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:57.503556  450884 pod_ready.go:81] duration metric: took 400.433562ms for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503565  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:59.514591  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:02.011308  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.148902  450393 start.go:364] duration metric: took 56.514427046s to acquireMachinesLock for "embed-certs-321139"
	I0805 12:59:03.148967  450393 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:59:03.148976  450393 fix.go:54] fixHost starting: 
	I0805 12:59:03.149432  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:03.149473  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:03.166485  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0805 12:59:03.166934  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:03.167443  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:03.167469  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:03.167808  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:03.168062  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:03.168258  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:03.170011  450393 fix.go:112] recreateIfNeeded on embed-certs-321139: state=Stopped err=<nil>
	I0805 12:59:03.170036  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	W0805 12:59:03.170221  450393 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:59:03.172109  450393 out.go:177] * Restarting existing kvm2 VM for "embed-certs-321139" ...
	I0805 12:58:58.886766  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.886792  450576 pod_ready.go:81] duration metric: took 510.389529ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.886804  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891878  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.891907  450576 pod_ready.go:81] duration metric: took 5.094036ms for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891919  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896953  450576 pod_ready.go:92] pod "kube-proxy-cwq69" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.896981  450576 pod_ready.go:81] duration metric: took 5.054422ms for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896995  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902437  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.902456  450576 pod_ready.go:81] duration metric: took 5.453487ms for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902465  450576 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:00.909633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.410487  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.173728  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Start
	I0805 12:59:03.173932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring networks are active...
	I0805 12:59:03.174932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network default is active
	I0805 12:59:03.175441  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network mk-embed-certs-321139 is active
	I0805 12:59:03.176102  450393 main.go:141] libmachine: (embed-certs-321139) Getting domain xml...
	I0805 12:59:03.176848  450393 main.go:141] libmachine: (embed-certs-321139) Creating domain...
	I0805 12:59:01.984198  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:01.984237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984501  451238 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:59:01.984534  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984750  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.987690  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988085  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.988115  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988240  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.988470  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988782  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988945  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.989173  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.989407  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.989425  451238 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:59:02.108368  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:59:02.108406  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.111301  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111669  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.111712  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111837  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.112027  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112212  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112393  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.112563  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.112797  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.112824  451238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:02.225638  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:02.225681  451238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:02.225731  451238 buildroot.go:174] setting up certificates
	I0805 12:59:02.225745  451238 provision.go:84] configureAuth start
	I0805 12:59:02.225760  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:02.226099  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:02.229252  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229643  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.229671  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229885  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.232479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.232912  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.232951  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.233125  451238 provision.go:143] copyHostCerts
	I0805 12:59:02.233188  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:02.233201  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:02.233271  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:02.233412  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:02.233426  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:02.233459  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:02.233543  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:02.233553  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:02.233581  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:02.233661  451238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
	I0805 12:59:02.470213  451238 provision.go:177] copyRemoteCerts
	I0805 12:59:02.470328  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:02.470369  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.473450  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473791  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.473829  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473964  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.474173  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.474313  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.474429  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:02.558831  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:02.583652  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:59:02.609154  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:02.635827  451238 provision.go:87] duration metric: took 410.067115ms to configureAuth
	I0805 12:59:02.635862  451238 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:02.636109  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:59:02.636357  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.638964  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639466  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.639489  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639644  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.639953  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640197  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640454  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.640733  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.640975  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.641000  451238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:02.917466  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:02.917499  451238 machine.go:97] duration metric: took 1.036701572s to provisionDockerMachine
	I0805 12:59:02.917512  451238 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:59:02.917522  451238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:02.917539  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:02.917946  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:02.917979  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.920900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921383  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.921426  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.921773  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.921958  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.922220  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.003670  451238 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:03.008348  451238 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:03.008384  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:03.008468  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:03.008588  451238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:03.008727  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:03.019098  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:03.042969  451238 start.go:296] duration metric: took 125.441712ms for postStartSetup
	I0805 12:59:03.043011  451238 fix.go:56] duration metric: took 20.961935899s for fixHost
	I0805 12:59:03.043034  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.045667  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046030  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.046062  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046254  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.046508  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046701  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046824  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.047002  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:03.047182  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:03.047192  451238 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:59:03.148773  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862743.120260193
	
	I0805 12:59:03.148798  451238 fix.go:216] guest clock: 1722862743.120260193
	I0805 12:59:03.148807  451238 fix.go:229] Guest: 2024-08-05 12:59:03.120260193 +0000 UTC Remote: 2024-08-05 12:59:03.043015059 +0000 UTC m=+231.118249223 (delta=77.245134ms)
	I0805 12:59:03.148831  451238 fix.go:200] guest clock delta is within tolerance: 77.245134ms
	I0805 12:59:03.148836  451238 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 21.067801046s
	I0805 12:59:03.148857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.149131  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:03.152026  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152444  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.152475  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152645  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153423  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153495  451238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:03.153551  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.153860  451238 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:03.153895  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.156566  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156903  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.156963  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157187  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157411  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.157479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.157508  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157594  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.157770  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157782  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.157924  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.158107  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.158344  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.254162  451238 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:03.260684  451238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:03.409837  451238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:03.416010  451238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:03.416093  451238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:03.433548  451238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:03.433584  451238 start.go:495] detecting cgroup driver to use...
	I0805 12:59:03.433667  451238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:03.450756  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:03.467281  451238 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:03.467341  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:03.482537  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:03.498623  451238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:03.621224  451238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:03.781777  451238 docker.go:233] disabling docker service ...
	I0805 12:59:03.781842  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:03.798020  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:03.818262  451238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:03.940897  451238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:04.075622  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:04.092487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:04.112699  451238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:59:04.112769  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.124102  451238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:04.124181  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.136339  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.147689  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.158552  451238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:04.171412  451238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:04.183284  451238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:04.183336  451238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:04.199465  451238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:04.215571  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:04.342540  451238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:04.521705  451238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:04.521786  451238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:04.526734  451238 start.go:563] Will wait 60s for crictl version
	I0805 12:59:04.526795  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:04.530528  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:04.572468  451238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:04.572557  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.602411  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.636641  451238 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:59:04.638062  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:04.641240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641734  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:04.641763  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641991  451238 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:04.646446  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:04.659876  451238 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:04.660037  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:59:04.660105  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:04.709636  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:04.709725  451238 ssh_runner.go:195] Run: which lz4
	I0805 12:59:04.714439  451238 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:59:04.719014  451238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:04.719047  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:59:06.414858  451238 crio.go:462] duration metric: took 1.70045694s to copy over tarball
	I0805 12:59:06.414950  451238 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:04.513198  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.018197  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:05.911274  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.911405  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:04.478626  450393 main.go:141] libmachine: (embed-certs-321139) Waiting to get IP...
	I0805 12:59:04.479615  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.480147  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.480209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.480103  452359 retry.go:31] will retry after 236.369287ms: waiting for machine to come up
	I0805 12:59:04.718716  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.719184  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.719209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.719125  452359 retry.go:31] will retry after 296.553947ms: waiting for machine to come up
	I0805 12:59:05.017667  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.018198  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.018235  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.018143  452359 retry.go:31] will retry after 427.78496ms: waiting for machine to come up
	I0805 12:59:05.447507  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.448075  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.448105  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.448038  452359 retry.go:31] will retry after 469.229133ms: waiting for machine to come up
	I0805 12:59:05.918469  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.919013  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.919047  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.918998  452359 retry.go:31] will retry after 720.005641ms: waiting for machine to come up
	I0805 12:59:06.641103  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:06.641679  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:06.641708  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:06.641634  452359 retry.go:31] will retry after 591.439327ms: waiting for machine to come up
	I0805 12:59:07.234573  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:07.235179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:07.235207  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:07.235063  452359 retry.go:31] will retry after 1.087958168s: waiting for machine to come up
	I0805 12:59:08.324599  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:08.325179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:08.325212  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:08.325129  452359 retry.go:31] will retry after 1.316276197s: waiting for machine to come up
	I0805 12:59:09.473711  451238 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058718584s)
	I0805 12:59:09.473740  451238 crio.go:469] duration metric: took 3.058854233s to extract the tarball
	I0805 12:59:09.473748  451238 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:09.524420  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:09.562003  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:09.562035  451238 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:59:09.562107  451238 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.562159  451238 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.562156  451238 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.562194  451238 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.562228  451238 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.562256  451238 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.562374  451238 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:59:09.562274  451238 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.563981  451238 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.563993  451238 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.564007  451238 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.564015  451238 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.564032  451238 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.564041  451238 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.564076  451238 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.564075  451238 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:59:09.727888  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.732060  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.732150  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.736408  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.748051  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.753579  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.762561  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:59:09.822623  451238 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:59:09.822681  451238 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.822742  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.824314  451238 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:59:09.824360  451238 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.824404  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905619  451238 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:59:09.905778  451238 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.905738  451238 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:59:09.905944  451238 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.905998  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905851  451238 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:59:09.906075  451238 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.906133  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905861  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916767  451238 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:59:09.916796  451238 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:59:09.916812  451238 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.916830  451238 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:59:09.916864  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916868  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916905  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.916958  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.918683  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.918718  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.918776  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:10.007687  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:59:10.007721  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:59:10.007871  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:10.042432  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:59:10.061343  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:59:10.061400  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:59:10.061469  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:59:10.073852  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:59:10.084957  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:59:10.423355  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:10.563992  451238 cache_images.go:92] duration metric: took 1.001937985s to LoadCachedImages
	W0805 12:59:10.564184  451238 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
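Editor's note: the block above is minikube's image pre-load step for the old v1.20.0 control plane. Each required image is looked up in the container runtime with `podman image inspect`, stale copies are removed with `crictl rmi`, and the image is then loaded from the local cache directory; the warning fires because the kube-scheduler tarball is missing from that cache. A minimal sketch of the "needs transfer" decision (not the minikube source; the expected image ID below is a placeholder):

	// Sketch only: mirrors the "needs transfer" decision shown in the log.
	// Assumes podman is on PATH on the guest; the expected ID is a placeholder.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether the runtime is missing the image or holds
	// it under a different ID than the cached copy.
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present in the container runtime at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		// Hypothetical expected ID, truncated from the hash in the log above.
		if needsTransfer("registry.k8s.io/kube-scheduler:v1.20.0", "3138b6e3d471") {
			fmt.Println("registry.k8s.io/kube-scheduler:v1.20.0 needs transfer")
		}
	}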
	I0805 12:59:10.564211  451238 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:59:10.564345  451238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:10.564427  451238 ssh_runner.go:195] Run: crio config
	I0805 12:59:10.612146  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:59:10.612180  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:10.612197  451238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:10.612226  451238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:59:10.612415  451238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:10.612507  451238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:59:10.623036  451238 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:10.623121  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:10.633484  451238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:59:10.652444  451238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:10.673192  451238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
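Editor's note: the kubeadm.yaml.new just written is the multi-document manifest rendered above: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by `---`. A minimal sketch (assuming gopkg.in/yaml.v3 is available; not part of minikube) that walks those documents and prints each apiVersion/kind pair:

	// Walk the multi-document kubeadm YAML and print each document's header.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more documents
				}
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}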
	I0805 12:59:10.694533  451238 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:10.699901  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:10.714251  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:10.838992  451238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:10.857248  451238 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:59:10.857279  451238 certs.go:194] generating shared ca certs ...
	I0805 12:59:10.857303  451238 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:10.857515  451238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:10.857587  451238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:10.857602  451238 certs.go:256] generating profile certs ...
	I0805 12:59:10.857746  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:59:10.857847  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:59:10.857907  451238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:59:10.858072  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:10.858122  451238 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:10.858143  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:10.858177  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:10.858207  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:10.858235  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:10.858294  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:10.859247  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:10.908518  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:10.949310  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:10.981447  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:11.008085  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:59:11.035539  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:59:11.071371  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:11.099842  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:59:11.135629  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:11.164194  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:11.190595  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:11.219765  451238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:11.240836  451238 ssh_runner.go:195] Run: openssl version
	I0805 12:59:11.247516  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:11.260736  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266004  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266100  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.273012  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:11.285453  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:11.296934  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301588  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301655  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.307459  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:11.318833  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:11.330224  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334864  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334917  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.341338  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
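Editor's note: the sequence above installs each PEM under /usr/share/ca-certificates and then symlinks it into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, b5213941.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trusted certificates. A small illustrative sketch of that hash-and-link step, assuming the openssl binary is installed (not the minikube implementation):

	// Compute a certificate's OpenSSL subject hash and link it into /etc/ssl/certs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Recreate the symlink if it already exists, matching `ln -fs`.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}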
	I0805 12:59:11.353084  451238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:11.358532  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:11.365419  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:11.371581  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:11.378308  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:11.384640  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:11.390622  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
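Editor's note: each `openssl x509 -checkend 86400` call above succeeds only if the certificate is still valid 24 hours from now; a failure here would force certificate regeneration before the cluster restart. An equivalent check written as a sketch with Go's crypto/x509 (the path is one of those checked above):

	// Report whether a PEM-encoded certificate expires within the given duration.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}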
	I0805 12:59:11.397027  451238 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:11.397199  451238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:11.397286  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.436612  451238 cri.go:89] found id: ""
	I0805 12:59:11.436689  451238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:11.447906  451238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:11.447927  451238 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:11.447984  451238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:11.459282  451238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:11.460548  451238 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:11.461355  451238 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-635707" cluster setting kubeconfig missing "old-k8s-version-635707" context setting]
	I0805 12:59:11.462324  451238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:11.476306  451238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:11.487869  451238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.41
	I0805 12:59:11.487911  451238 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:11.487927  451238 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:11.487988  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.526601  451238 cri.go:89] found id: ""
	I0805 12:59:11.526674  451238 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:11.545429  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:11.556725  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:11.556755  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:11.556820  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:11.566564  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:11.566648  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:11.576859  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:11.586237  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:11.586329  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:11.596721  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.607239  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:11.607340  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.617626  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:11.627179  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:11.627251  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:11.637566  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:11.648889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:11.780270  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:08.018320  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:08.018363  450884 pod_ready.go:81] duration metric: took 10.514788401s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:08.018379  450884 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:10.270876  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:10.409419  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:12.410565  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:09.643077  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:09.643655  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:09.643692  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:09.643554  452359 retry.go:31] will retry after 1.473183692s: waiting for machine to come up
	I0805 12:59:11.118468  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:11.119005  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:11.119035  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:11.118943  452359 retry.go:31] will retry after 2.036333626s: waiting for machine to come up
	I0805 12:59:13.156866  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:13.157390  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:13.157419  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:13.157339  452359 retry.go:31] will retry after 2.095065362s: waiting for machine to come up
	I0805 12:59:12.549918  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.781853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.877381  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
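Editor's note: because existing configuration files were detected, the restart path rebuilds the control plane piecewise with individual `kubeadm init phase` commands (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. A sketch that shells out to the same phases in the order shown above (illustrative only, not the minikube source):

	// Run the kubeadm init phases used by the restart path, in order.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := exec.Command("/bin/bash", "-c",
				fmt.Sprintf("sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH "+
					"kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", p))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
				return
			}
		}
	}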
	I0805 12:59:12.978141  451238 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:12.978250  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.479242  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.978456  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.478575  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.978783  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.479342  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.978307  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.479180  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:12.526543  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.027362  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:14.909480  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:16.911090  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.253589  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:15.254081  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:15.254111  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:15.254020  452359 retry.go:31] will retry after 2.859783781s: waiting for machine to come up
	I0805 12:59:18.116972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:18.117528  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:18.117559  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:18.117486  452359 retry.go:31] will retry after 4.456427854s: waiting for machine to come up
	I0805 12:59:16.978915  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.978574  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.478343  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.978820  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.478488  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.978335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.478945  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.979040  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.479324  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
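Editor's note: the repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above are a roughly half-second polling loop waiting for the restarted API server process to appear before health checks begin. A minimal sketch of such a wait loop (the two-minute deadline is an assumption for the example):

	// Poll for the kube-apiserver process until it appears or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}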
	I0805 12:59:17.525332  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.525407  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.025092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.410416  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:21.908646  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.576842  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577261  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has current primary IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577291  450393 main.go:141] libmachine: (embed-certs-321139) Found IP for machine: 192.168.39.196
	I0805 12:59:22.577306  450393 main.go:141] libmachine: (embed-certs-321139) Reserving static IP address...
	I0805 12:59:22.577834  450393 main.go:141] libmachine: (embed-certs-321139) Reserved static IP address: 192.168.39.196
	I0805 12:59:22.577877  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.577893  450393 main.go:141] libmachine: (embed-certs-321139) Waiting for SSH to be available...
	I0805 12:59:22.577915  450393 main.go:141] libmachine: (embed-certs-321139) DBG | skip adding static IP to network mk-embed-certs-321139 - found existing host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"}
	I0805 12:59:22.577922  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Getting to WaitForSSH function...
	I0805 12:59:22.580080  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580520  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.580552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580707  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH client type: external
	I0805 12:59:22.580742  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa (-rw-------)
	I0805 12:59:22.580764  450393 main.go:141] libmachine: (embed-certs-321139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:22.580778  450393 main.go:141] libmachine: (embed-certs-321139) DBG | About to run SSH command:
	I0805 12:59:22.580793  450393 main.go:141] libmachine: (embed-certs-321139) DBG | exit 0
	I0805 12:59:22.703872  450393 main.go:141] libmachine: (embed-certs-321139) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:22.704333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetConfigRaw
	I0805 12:59:22.705046  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:22.707544  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.707919  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.707951  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.708240  450393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/config.json ...
	I0805 12:59:22.708474  450393 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:22.708501  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:22.708755  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.711177  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711488  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.711510  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711639  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.711842  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.711998  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.712157  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.712378  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.712581  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.712595  450393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:59:22.816371  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:22.816433  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816708  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:59:22.816743  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816959  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.819715  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820085  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.820108  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820321  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.820510  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820656  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820794  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.820952  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.821203  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.821229  450393 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-321139 && echo "embed-certs-321139" | sudo tee /etc/hostname
	I0805 12:59:22.938845  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-321139
	
	I0805 12:59:22.938888  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.942264  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942651  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.942684  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942904  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.943161  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943383  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943568  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.943777  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.943987  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.944011  450393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-321139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-321139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-321139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:23.062700  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:23.062734  450393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:23.062762  450393 buildroot.go:174] setting up certificates
	I0805 12:59:23.062774  450393 provision.go:84] configureAuth start
	I0805 12:59:23.062800  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:23.063142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.065839  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066140  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.066175  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066359  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.069214  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069562  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.069597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069746  450393 provision.go:143] copyHostCerts
	I0805 12:59:23.069813  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:23.069827  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:23.069897  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:23.070014  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:23.070025  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:23.070083  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:23.070185  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:23.070197  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:23.070226  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:23.070308  450393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.embed-certs-321139 san=[127.0.0.1 192.168.39.196 embed-certs-321139 localhost minikube]
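Editor's note: the server certificate generated above is signed by the minikube CA and carries a SAN list covering the loopback address, the VM IP, the machine name, and the generic localhost/minikube names, so the endpoint validates however it is addressed. A short sketch showing how such SANs are attached with Go's crypto/x509 (self-signed here for brevity, whereas minikube signs with its CA key):

	// Create a self-signed server certificate with a SAN list like the one above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-321139"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			DNSNames:     []string{"embed-certs-321139", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.196")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}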
	I0805 12:59:23.223660  450393 provision.go:177] copyRemoteCerts
	I0805 12:59:23.223759  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:23.223799  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.226548  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.226980  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.227014  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.227195  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.227449  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.227624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.227801  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.311952  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:59:23.336888  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:23.363397  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:23.388197  450393 provision.go:87] duration metric: took 325.408192ms to configureAuth
	I0805 12:59:23.388234  450393 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:23.388470  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:23.388596  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.391247  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.391626  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391843  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.392054  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392240  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392371  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.392528  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.392825  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.392853  450393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:23.675427  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:23.675459  450393 machine.go:97] duration metric: took 966.969142ms to provisionDockerMachine
	I0805 12:59:23.675472  450393 start.go:293] postStartSetup for "embed-certs-321139" (driver="kvm2")
	I0805 12:59:23.675484  450393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:23.675515  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.675885  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:23.675912  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.678780  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679100  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.679152  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.679524  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.679657  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.679860  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.764372  450393 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:23.769059  450393 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:23.769088  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:23.769162  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:23.769231  450393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:23.769334  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:23.781287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:23.808609  450393 start.go:296] duration metric: took 133.117086ms for postStartSetup
	I0805 12:59:23.808665  450393 fix.go:56] duration metric: took 20.659690035s for fixHost
	I0805 12:59:23.808694  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.811519  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.811948  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.811978  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.812164  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.812366  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812539  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812708  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.812897  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.813137  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.813151  450393 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:59:23.916498  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862763.883942670
	
	I0805 12:59:23.916521  450393 fix.go:216] guest clock: 1722862763.883942670
	I0805 12:59:23.916536  450393 fix.go:229] Guest: 2024-08-05 12:59:23.88394267 +0000 UTC Remote: 2024-08-05 12:59:23.8086712 +0000 UTC m=+359.764794687 (delta=75.27147ms)
	I0805 12:59:23.916570  450393 fix.go:200] guest clock delta is within tolerance: 75.27147ms
	I0805 12:59:23.916578  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 20.767637373s
	I0805 12:59:23.916598  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.916867  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.919570  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.919972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.919999  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.920142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920666  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920837  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920930  450393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:23.920981  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.921063  450393 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:23.921083  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.924176  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924557  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924588  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924613  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924635  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924749  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.924936  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.925021  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925127  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925286  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925369  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.925454  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:24.000693  450393 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:24.023194  450393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:24.178807  450393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:24.184954  450393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:24.185031  450393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:24.201420  450393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:24.201453  450393 start.go:495] detecting cgroup driver to use...
	I0805 12:59:24.201543  450393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:24.218603  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:24.233928  450393 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:24.233999  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:24.248455  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:24.263355  450393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:24.386806  450393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:24.565128  450393 docker.go:233] disabling docker service ...
	I0805 12:59:24.565229  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:24.581053  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:24.594297  450393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:24.716615  450393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:24.835687  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:24.850666  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:24.870993  450393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:59:24.871055  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.881731  450393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:24.881815  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.893156  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.903802  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.915189  450393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:24.926967  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.938008  450393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.956033  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.967863  450393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:24.977758  450393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:24.977822  450393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:24.993837  450393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:25.005009  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:25.135856  450393 ssh_runner.go:195] Run: sudo systemctl restart crio
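The failed sysctl probe above is expected when the br_netfilter kernel module is not yet loaded; the tooling falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A minimal Go sketch of that check-and-fallback, for reference only (the helper name and error handling are illustrative, not minikube's actual code):

	// Illustrative only: mirrors the logged sequence of probing the
	// bridge-netfilter sysctl, loading br_netfilter as a fallback,
	// and enabling IPv4 forwarding.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func ensureBridgeNetfilter() error {
		// The sysctl file only exists once the br_netfilter module is loaded.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Fall back to loading the module, as the log does with modprobe.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}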
	I0805 12:59:25.277425  450393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:25.277513  450393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:25.282628  450393 start.go:563] Will wait 60s for crictl version
	I0805 12:59:25.282704  450393 ssh_runner.go:195] Run: which crictl
	I0805 12:59:25.287324  450393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:25.335315  450393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:25.335396  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.367574  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.398926  450393 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:59:21.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.478367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.978424  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.478877  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.978841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.478635  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.978824  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.479076  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.979222  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.478928  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
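The repeated pgrep runs above are a poll loop waiting for the kube-apiserver process to appear, retried roughly every 500ms. A minimal Go sketch of such a wait; the pattern and interval come from the log, while the helper itself is an illustrative assumption rather than minikube code:

	// Illustrative only: wait for a process matching `pattern` to appear,
	// analogous to repeatedly running "pgrep -xnf kube-apiserver.*minikube.*".
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("process matching %q did not appear within %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
		}
	}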
	I0805 12:59:24.025234  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:26.028817  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:23.909428  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.910877  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:27.911235  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.400219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:25.403052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403508  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:25.403552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403849  450393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:25.408402  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
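The grep/rewrite pair above makes sure /etc/hosts contains a host.minikube.internal entry pointing at the gateway IP, without duplicating it on repeated starts. A minimal Go sketch of the same idempotent update, under the assumption that a simple "append if missing" is sufficient (illustrative, not the actual implementation):

	// Illustrative only: ensure an /etc/hosts entry exists, similar to the
	// logged grep check followed by a rewrite of the file.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				return nil // already present, matching the grep check in the log
			}
		}
		entry := ip + "\t" + host + "\n"
		if len(data) > 0 && data[len(data)-1] != '\n' {
			entry = "\n" + entry
		}
		return os.WriteFile(path, append(data, []byte(entry)...), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}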
	I0805 12:59:25.423146  450393 kubeadm.go:883] updating cluster {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:25.423301  450393 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:59:25.423368  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:25.460713  450393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:59:25.460795  450393 ssh_runner.go:195] Run: which lz4
	I0805 12:59:25.464997  450393 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:59:25.469397  450393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:25.469452  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:59:26.966110  450393 crio.go:462] duration metric: took 1.501152522s to copy over tarball
	I0805 12:59:26.966207  450393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:26.978648  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.478951  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.978405  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.479008  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.978521  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.479199  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.979288  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.479030  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.978372  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.479194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.525888  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:31.025690  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:30.410973  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:32.910889  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:29.287605  450393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321364872s)
	I0805 12:59:29.287636  450393 crio.go:469] duration metric: took 2.321487153s to extract the tarball
	I0805 12:59:29.287647  450393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:29.329182  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:29.372183  450393 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:59:29.372211  450393 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:59:29.372220  450393 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0805 12:59:29.372349  450393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-321139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:29.372433  450393 ssh_runner.go:195] Run: crio config
	I0805 12:59:29.426003  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:29.426025  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:29.426036  450393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:29.426059  450393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-321139 NodeName:embed-certs-321139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:59:29.426192  450393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-321139"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:29.426250  450393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:59:29.436248  450393 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:29.436315  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:29.445844  450393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0805 12:59:29.463125  450393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:29.479685  450393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0805 12:59:29.499033  450393 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:29.503175  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:29.516141  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:29.645914  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:29.664578  450393 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139 for IP: 192.168.39.196
	I0805 12:59:29.664608  450393 certs.go:194] generating shared ca certs ...
	I0805 12:59:29.664626  450393 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:29.664853  450393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:29.664922  450393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:29.664939  450393 certs.go:256] generating profile certs ...
	I0805 12:59:29.665058  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/client.key
	I0805 12:59:29.665143  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key.ce53eda3
	I0805 12:59:29.665183  450393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key
	I0805 12:59:29.665293  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:29.665324  450393 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:29.665331  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:29.665360  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:29.665382  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:29.665405  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:29.665442  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:29.666287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:29.705969  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:29.752700  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:29.779819  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:29.806578  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0805 12:59:29.832277  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:59:29.861682  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:29.888113  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:59:29.915023  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:29.942582  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:29.971225  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:29.999278  450393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:30.018294  450393 ssh_runner.go:195] Run: openssl version
	I0805 12:59:30.024645  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:30.035446  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040216  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040279  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.046151  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:30.057664  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:30.068822  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074073  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074138  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.080126  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:30.091168  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:30.103171  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108840  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108924  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.115469  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:30.126742  450393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:30.132008  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:30.138285  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:30.144251  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:30.150718  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:30.157183  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:30.163709  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
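The openssl x509 -checkend 86400 runs above verify that each control-plane certificate will still be valid 24 hours from now before reusing it. A rough Go equivalent using crypto/x509; the certificate path is taken from the log, everything else in the sketch is an assumption:

	// Illustrative only: check that a PEM certificate is still valid
	// `d` from now, roughly what "openssl x509 -checkend 86400" does.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func certValidFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}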
	I0805 12:59:30.170852  450393 kubeadm.go:392] StartCluster: {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:30.170987  450393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:30.171055  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.216014  450393 cri.go:89] found id: ""
	I0805 12:59:30.216103  450393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:30.234046  450393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:30.234076  450393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:30.234151  450393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:30.245861  450393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:30.247434  450393 kubeconfig.go:125] found "embed-certs-321139" server: "https://192.168.39.196:8443"
	I0805 12:59:30.250024  450393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:30.261066  450393 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.196
	I0805 12:59:30.261116  450393 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:30.261140  450393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:30.261201  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.306587  450393 cri.go:89] found id: ""
	I0805 12:59:30.306678  450393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:30.326818  450393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:30.336908  450393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:30.336931  450393 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:30.336984  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:30.346004  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:30.346105  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:30.355979  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:30.366124  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:30.366185  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:30.376923  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.386526  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:30.386599  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.396661  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:30.406693  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:30.406765  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:30.417789  450393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:30.428214  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:30.554777  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.703579  450393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.14876196s)
	I0805 12:59:31.703620  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.925724  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.999840  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:32.089948  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:32.090084  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.590152  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.090222  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.115351  450393 api_server.go:72] duration metric: took 1.025404322s to wait for apiserver process to appear ...
	I0805 12:59:33.115385  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:59:33.115411  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:33.115983  450393 api_server.go:269] stopped: https://192.168.39.196:8443/healthz: Get "https://192.168.39.196:8443/healthz": dial tcp 192.168.39.196:8443: connect: connection refused
	I0805 12:59:33.616210  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:31.978481  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.479031  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.978796  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.478677  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.979377  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.478595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.979227  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.478695  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.978911  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.479327  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.027363  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:35.525528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:36.274855  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.274895  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.274912  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.314290  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.314325  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.615566  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.620594  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:36.620626  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.116251  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.120719  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:37.120749  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.616330  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.620778  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 12:59:37.627608  450393 api_server.go:141] control plane version: v1.30.3
	I0805 12:59:37.627640  450393 api_server.go:131] duration metric: took 4.512246076s to wait for apiserver health ...
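The healthz loop above keeps retrying the endpoint until it stops answering 403/500 and returns 200. A minimal Go sketch of such a poll; TLS verification is skipped here only to keep the sketch short, whereas the real client authenticates against the cluster certificates, and the URL and timings are taken from the log:

	// Illustrative only: poll an apiserver /healthz endpoint until it
	// returns HTTP 200 or the timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.39.196:8443/healthz", time.Minute))
	}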
	I0805 12:59:37.627652  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:37.627661  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:37.628987  450393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:59:35.410070  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.411719  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.630068  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:59:37.650034  450393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:59:37.691891  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:59:37.704810  450393 system_pods.go:59] 8 kube-system pods found
	I0805 12:59:37.704855  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:59:37.704866  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:59:37.704887  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:59:37.704900  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:59:37.704916  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 12:59:37.704928  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:59:37.704946  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:59:37.704956  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:59:37.704967  450393 system_pods.go:74] duration metric: took 13.04358ms to wait for pod list to return data ...
	I0805 12:59:37.704980  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:59:37.710340  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:59:37.710367  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 12:59:37.710382  450393 node_conditions.go:105] duration metric: took 5.392102ms to run NodePressure ...
	I0805 12:59:37.710402  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:37.995945  450393 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000274  450393 kubeadm.go:739] kubelet initialised
	I0805 12:59:38.000295  450393 kubeadm.go:740] duration metric: took 4.323835ms waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000302  450393 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:38.006122  450393 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.012368  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012392  450393 pod_ready.go:81] duration metric: took 6.243837ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.012400  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012406  450393 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.016338  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016357  450393 pod_ready.go:81] duration metric: took 3.943012ms for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.016364  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016369  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.021019  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021044  450393 pod_ready.go:81] duration metric: took 4.667242ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.021055  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021063  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.096303  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096334  450393 pod_ready.go:81] duration metric: took 75.253785ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.096345  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096351  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.495648  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495677  450393 pod_ready.go:81] duration metric: took 399.318117ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.495687  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495694  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.896066  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896091  450393 pod_ready.go:81] duration metric: took 400.39101ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.896101  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896108  450393 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:39.295587  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295618  450393 pod_ready.go:81] duration metric: took 399.499354ms for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:39.295632  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295653  450393 pod_ready.go:38] duration metric: took 1.295340252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:39.295675  450393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:59:39.308136  450393 ops.go:34] apiserver oom_adj: -16
	I0805 12:59:39.308161  450393 kubeadm.go:597] duration metric: took 9.07407738s to restartPrimaryControlPlane
	I0805 12:59:39.308170  450393 kubeadm.go:394] duration metric: took 9.137335392s to StartCluster
	I0805 12:59:39.308188  450393 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.308272  450393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:39.310750  450393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.311015  450393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:59:39.311149  450393 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:59:39.311240  450393 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-321139"
	I0805 12:59:39.311289  450393 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-321139"
	W0805 12:59:39.311303  450393 addons.go:243] addon storage-provisioner should already be in state true
	I0805 12:59:39.311301  450393 addons.go:69] Setting metrics-server=true in profile "embed-certs-321139"
	I0805 12:59:39.311305  450393 addons.go:69] Setting default-storageclass=true in profile "embed-certs-321139"
	I0805 12:59:39.311351  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311360  450393 addons.go:234] Setting addon metrics-server=true in "embed-certs-321139"
	W0805 12:59:39.311371  450393 addons.go:243] addon metrics-server should already be in state true
	I0805 12:59:39.311371  450393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-321139"
	I0805 12:59:39.311454  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311287  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:39.311848  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311897  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311906  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.311912  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311964  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.312115  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.313050  450393 out.go:177] * Verifying Kubernetes components...
	I0805 12:59:39.314390  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:39.327427  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0805 12:59:39.327687  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0805 12:59:39.328016  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328155  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328609  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328649  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.328735  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328786  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.329013  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329086  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329560  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329599  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.329676  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329721  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.330884  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0805 12:59:39.331381  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.331878  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.331902  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.332289  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.332529  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.336244  450393 addons.go:234] Setting addon default-storageclass=true in "embed-certs-321139"
	W0805 12:59:39.336269  450393 addons.go:243] addon default-storageclass should already be in state true
	I0805 12:59:39.336305  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.336688  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.336735  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.347255  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0805 12:59:39.347411  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0805 12:59:39.347776  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.347910  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.348271  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348291  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348464  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348476  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348603  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348760  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.348817  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348955  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.350697  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.350906  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.352896  450393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:39.352895  450393 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 12:59:39.354185  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 12:59:39.354207  450393 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 12:59:39.354224  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.354266  450393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.354277  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:59:39.354292  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.356641  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0805 12:59:39.357213  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.357546  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.357791  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.357814  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.357867  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.358001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.358020  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359294  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359322  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.359337  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359345  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.359353  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359488  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359669  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.359783  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.359977  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.360009  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.360077  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.360210  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.380935  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0805 12:59:39.381394  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.381987  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.382029  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.382362  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.382603  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.384225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.384497  450393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.384515  450393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:59:39.384536  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.389471  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.389972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.390001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.390124  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.390303  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.390604  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.390791  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.513696  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:39.533291  450393 node_ready.go:35] waiting up to 6m0s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:39.597816  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.700234  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.719936  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 12:59:39.719958  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 12:59:39.760405  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 12:59:39.760441  450393 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 12:59:39.808765  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.808794  450393 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 12:59:39.833073  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.946594  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.946633  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.946968  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.946995  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.947121  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.947137  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.947456  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.947477  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947490  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.953919  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.953942  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.954189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.954209  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636249  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636274  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636638  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:40.636715  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.636729  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636745  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636757  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636989  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.637008  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.671789  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.671819  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672207  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672217  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.672225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672468  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672485  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672499  450393 addons.go:475] Verifying addon metrics-server=true in "embed-certs-321139"
	I0805 12:59:40.674497  450393 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 12:59:36.978361  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.478380  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.478283  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.979257  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.478407  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.978772  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.478395  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.979309  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.478302  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.026001  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.026706  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:39.909336  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:41.910240  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.675778  450393 addons.go:510] duration metric: took 1.364642066s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 12:59:41.537321  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:44.037571  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:41.978791  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.478841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.478344  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.978613  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.478756  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.478363  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.478417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.524568  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:45.024950  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:47.025453  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:44.408846  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.410085  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.537183  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:47.037178  450393 node_ready.go:49] node "embed-certs-321139" has status "Ready":"True"
	I0805 12:59:47.037206  450393 node_ready.go:38] duration metric: took 7.503884334s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:47.037221  450393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:47.043159  450393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048037  450393 pod_ready.go:92] pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:47.048088  450393 pod_ready.go:81] duration metric: took 4.901694ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048102  450393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055429  450393 pod_ready.go:92] pod "etcd-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.055454  450393 pod_ready.go:81] duration metric: took 2.007345086s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055464  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060072  450393 pod_ready.go:92] pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.060095  450393 pod_ready.go:81] duration metric: took 4.624968ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060103  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065663  450393 pod_ready.go:92] pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.065689  450393 pod_ready.go:81] duration metric: took 5.578205ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065708  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071143  450393 pod_ready.go:92] pod "kube-proxy-shgv2" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.071166  450393 pod_ready.go:81] duration metric: took 5.450104ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071174  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:46.978356  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.478322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.978417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.478966  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.979317  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.478449  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.978364  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.479294  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.978435  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.478614  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.028075  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.524299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:48.908177  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:50.908490  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:52.909257  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:49.438002  450393 pod_ready.go:92] pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.438032  450393 pod_ready.go:81] duration metric: took 366.851004ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.438042  450393 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:51.443490  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:53.444534  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.978526  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.479187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.979090  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.478733  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.978571  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.478525  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.979125  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.478711  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.979266  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.478956  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.525369  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.526660  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:54.909757  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.409489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.445189  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.944983  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:56.979226  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.978634  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.478338  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.978987  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.479290  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.978383  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.478373  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.978412  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.479312  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.527240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.024177  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.024749  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:59.908362  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.909101  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.445471  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.944535  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.479119  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.978313  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.478401  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.979029  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.478963  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.978393  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.478418  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.978381  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.479229  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.028522  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.525385  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:04.409119  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.409863  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:05.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:07.452452  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.979172  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.479251  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.979183  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.478722  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.979248  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.478527  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.978581  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.478499  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.978520  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.478843  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.025651  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.525086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:08.909528  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.408408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:13.410472  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:09.945614  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:12.443723  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.978536  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.478504  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.979179  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:12.979258  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:13.022653  451238 cri.go:89] found id: ""
	I0805 13:00:13.022680  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.022689  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:13.022696  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:13.022766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:13.059292  451238 cri.go:89] found id: ""
	I0805 13:00:13.059326  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.059336  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:13.059343  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:13.059399  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:13.098750  451238 cri.go:89] found id: ""
	I0805 13:00:13.098782  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.098793  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:13.098802  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:13.098866  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:13.133307  451238 cri.go:89] found id: ""
	I0805 13:00:13.133338  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.133346  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:13.133353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:13.133420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:13.171124  451238 cri.go:89] found id: ""
	I0805 13:00:13.171160  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.171170  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:13.171177  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:13.171237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:13.209200  451238 cri.go:89] found id: ""
	I0805 13:00:13.209235  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.209247  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:13.209254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:13.209312  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:13.244261  451238 cri.go:89] found id: ""
	I0805 13:00:13.244302  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.244313  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:13.244324  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:13.244397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:13.283295  451238 cri.go:89] found id: ""
	I0805 13:00:13.283331  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.283342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:13.283356  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:13.283372  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:13.344134  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:13.344174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:13.384084  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:13.384119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:13.433784  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:13.433821  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:13.449756  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:13.449786  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:13.573090  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:16.074053  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:16.087817  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:16.087900  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:16.130938  451238 cri.go:89] found id: ""
	I0805 13:00:16.130970  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.130981  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:16.130989  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:16.131058  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:16.184208  451238 cri.go:89] found id: ""
	I0805 13:00:16.184245  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.184259  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:16.184269  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:16.184346  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:16.230959  451238 cri.go:89] found id: ""
	I0805 13:00:16.230998  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.231011  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:16.231020  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:16.231100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:16.282886  451238 cri.go:89] found id: ""
	I0805 13:00:16.282940  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.282954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:16.282963  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:16.283024  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:16.320345  451238 cri.go:89] found id: ""
	I0805 13:00:16.320381  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.320397  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:16.320404  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:16.320521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:16.356390  451238 cri.go:89] found id: ""
	I0805 13:00:16.356427  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.356439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:16.356447  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:16.356503  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:16.400477  451238 cri.go:89] found id: ""
	I0805 13:00:16.400510  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.400529  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:16.400539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:16.400612  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:16.440634  451238 cri.go:89] found id: ""
	I0805 13:00:16.440662  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.440673  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:16.440685  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:16.440702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:16.510879  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:16.510922  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:16.554294  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:16.554332  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:16.607798  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:16.607853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:16.622618  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:16.622655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:16.702599  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:14.025025  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.025182  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:15.909245  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.409729  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:14.445222  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.445451  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.944533  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:19.202789  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:19.215776  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:19.215851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:19.250503  451238 cri.go:89] found id: ""
	I0805 13:00:19.250540  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.250551  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:19.250558  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:19.250630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:19.287358  451238 cri.go:89] found id: ""
	I0805 13:00:19.287392  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.287403  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:19.287412  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:19.287484  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:19.322167  451238 cri.go:89] found id: ""
	I0805 13:00:19.322195  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.322203  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:19.322209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:19.322262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:19.356874  451238 cri.go:89] found id: ""
	I0805 13:00:19.356905  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.356923  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:19.356931  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:19.357006  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:19.395172  451238 cri.go:89] found id: ""
	I0805 13:00:19.395206  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.395217  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:19.395227  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:19.395294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:19.438404  451238 cri.go:89] found id: ""
	I0805 13:00:19.438431  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.438439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:19.438445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:19.438510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:19.474727  451238 cri.go:89] found id: ""
	I0805 13:00:19.474755  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.474762  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:19.474769  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:19.474832  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:19.513906  451238 cri.go:89] found id: ""
	I0805 13:00:19.513945  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.513953  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:19.513963  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:19.513977  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:19.528337  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:19.528378  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:19.601135  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.601168  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:19.601185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:19.676792  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:19.676844  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:19.716861  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:19.716894  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:18.025634  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.027525  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.909150  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.910153  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.945009  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:23.444529  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.266971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:22.280346  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:22.280422  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:22.314788  451238 cri.go:89] found id: ""
	I0805 13:00:22.314816  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.314824  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:22.314831  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:22.314884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:22.357357  451238 cri.go:89] found id: ""
	I0805 13:00:22.357394  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.357405  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:22.357414  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:22.357483  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:22.393254  451238 cri.go:89] found id: ""
	I0805 13:00:22.393288  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.393296  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:22.393302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:22.393366  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:22.434766  451238 cri.go:89] found id: ""
	I0805 13:00:22.434796  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.434807  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:22.434815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:22.434887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:22.475649  451238 cri.go:89] found id: ""
	I0805 13:00:22.475676  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.475684  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:22.475690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:22.475754  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:22.515633  451238 cri.go:89] found id: ""
	I0805 13:00:22.515662  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.515670  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:22.515677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:22.515757  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:22.550716  451238 cri.go:89] found id: ""
	I0805 13:00:22.550749  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.550759  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:22.550767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:22.550849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:22.588537  451238 cri.go:89] found id: ""
	I0805 13:00:22.588571  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.588583  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:22.588595  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:22.588609  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.638535  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:22.638577  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:22.654879  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:22.654919  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:22.721482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:22.721513  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:22.721529  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:22.801442  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:22.801489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.343805  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:25.358068  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:25.358176  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:25.393734  451238 cri.go:89] found id: ""
	I0805 13:00:25.393767  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.393778  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:25.393785  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:25.393849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:25.428217  451238 cri.go:89] found id: ""
	I0805 13:00:25.428244  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.428252  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:25.428257  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:25.428316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:25.462826  451238 cri.go:89] found id: ""
	I0805 13:00:25.462858  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.462869  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:25.462877  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:25.462961  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:25.502960  451238 cri.go:89] found id: ""
	I0805 13:00:25.502989  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.502998  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:25.503006  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:25.503072  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:25.538859  451238 cri.go:89] found id: ""
	I0805 13:00:25.538888  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.538897  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:25.538902  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:25.538964  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:25.577850  451238 cri.go:89] found id: ""
	I0805 13:00:25.577883  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.577894  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:25.577901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:25.577988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:25.611728  451238 cri.go:89] found id: ""
	I0805 13:00:25.611773  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.611785  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:25.611793  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:25.611865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:25.654987  451238 cri.go:89] found id: ""
	I0805 13:00:25.655018  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.655027  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:25.655039  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:25.655052  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:25.669124  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:25.669160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:25.747354  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:25.747380  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:25.747398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:25.825198  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:25.825241  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.865511  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:25.865546  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.526638  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.024414  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.025393  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.409361  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.411148  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.444607  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.447460  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:28.418263  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:28.431831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:28.431895  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:28.470249  451238 cri.go:89] found id: ""
	I0805 13:00:28.470280  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.470291  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:28.470301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:28.470373  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:28.506935  451238 cri.go:89] found id: ""
	I0805 13:00:28.506968  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.506977  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:28.506985  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:28.507053  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:28.546621  451238 cri.go:89] found id: ""
	I0805 13:00:28.546652  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.546663  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:28.546671  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:28.546749  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:28.584699  451238 cri.go:89] found id: ""
	I0805 13:00:28.584734  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.584745  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:28.584753  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:28.584820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:28.620693  451238 cri.go:89] found id: ""
	I0805 13:00:28.620726  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.620736  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:28.620744  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:28.620814  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:28.657340  451238 cri.go:89] found id: ""
	I0805 13:00:28.657370  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.657379  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:28.657385  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:28.657438  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:28.695126  451238 cri.go:89] found id: ""
	I0805 13:00:28.695156  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.695166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:28.695174  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:28.695239  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:28.729757  451238 cri.go:89] found id: ""
	I0805 13:00:28.729808  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.729821  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:28.729834  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:28.729852  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:28.769642  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:28.769675  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:28.818076  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:28.818114  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:28.831466  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:28.831496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:28.902788  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:28.902818  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:28.902836  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.482482  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:31.497767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:31.497867  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:31.536922  451238 cri.go:89] found id: ""
	I0805 13:00:31.536948  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.536960  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:31.536969  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:31.537040  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:31.572422  451238 cri.go:89] found id: ""
	I0805 13:00:31.572456  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.572466  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:31.572472  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:31.572531  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:31.607961  451238 cri.go:89] found id: ""
	I0805 13:00:31.607996  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.608008  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:31.608016  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:31.608082  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:31.641771  451238 cri.go:89] found id: ""
	I0805 13:00:31.641800  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.641822  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:31.641830  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:31.641904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:31.681661  451238 cri.go:89] found id: ""
	I0805 13:00:31.681695  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.681707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:31.681715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:31.681791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:31.723777  451238 cri.go:89] found id: ""
	I0805 13:00:31.723814  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.723823  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:31.723829  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:31.723922  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:31.759898  451238 cri.go:89] found id: ""
	I0805 13:00:31.759935  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.759948  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:31.759957  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:31.760022  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:31.798433  451238 cri.go:89] found id: ""
	I0805 13:00:31.798462  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.798470  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:31.798480  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:31.798497  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:31.872005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:31.872030  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:31.872045  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.952201  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:31.952240  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:29.524445  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.525646  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.909901  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:32.408826  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.944170  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.944427  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.995920  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:31.995955  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:32.047453  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:32.047493  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.562369  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:34.576644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:34.576708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:34.613002  451238 cri.go:89] found id: ""
	I0805 13:00:34.613036  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.613047  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:34.613056  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:34.613127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:34.650723  451238 cri.go:89] found id: ""
	I0805 13:00:34.650757  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.650769  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:34.650777  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:34.650851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:34.689047  451238 cri.go:89] found id: ""
	I0805 13:00:34.689073  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.689081  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:34.689088  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:34.689148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:34.727552  451238 cri.go:89] found id: ""
	I0805 13:00:34.727592  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.727604  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:34.727612  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:34.727683  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:34.761661  451238 cri.go:89] found id: ""
	I0805 13:00:34.761696  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.761707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:34.761715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:34.761791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:34.800062  451238 cri.go:89] found id: ""
	I0805 13:00:34.800116  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.800128  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:34.800137  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:34.800198  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:34.833536  451238 cri.go:89] found id: ""
	I0805 13:00:34.833566  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.833578  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:34.833586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:34.833654  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:34.868079  451238 cri.go:89] found id: ""
	I0805 13:00:34.868117  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.868126  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:34.868135  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:34.868149  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:34.920092  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:34.920124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.934484  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:34.934510  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:35.007716  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:35.007751  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:35.007768  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:35.088183  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:35.088233  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:34.024704  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.025754  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.409917  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.409993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.444842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.943985  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:37.633443  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:37.647405  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:37.647470  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:37.684682  451238 cri.go:89] found id: ""
	I0805 13:00:37.684711  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.684720  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:37.684727  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:37.684779  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:37.723413  451238 cri.go:89] found id: ""
	I0805 13:00:37.723442  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.723449  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:37.723455  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:37.723506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:37.758388  451238 cri.go:89] found id: ""
	I0805 13:00:37.758418  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.758428  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:37.758437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:37.758501  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:37.797846  451238 cri.go:89] found id: ""
	I0805 13:00:37.797879  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.797890  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:37.797901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:37.797971  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:37.837053  451238 cri.go:89] found id: ""
	I0805 13:00:37.837082  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.837092  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:37.837104  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:37.837163  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:37.876185  451238 cri.go:89] found id: ""
	I0805 13:00:37.876211  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.876220  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:37.876226  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:37.876294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:37.915318  451238 cri.go:89] found id: ""
	I0805 13:00:37.915350  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.915362  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:37.915370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:37.915429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:37.953916  451238 cri.go:89] found id: ""
	I0805 13:00:37.953944  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.953954  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:37.953964  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:37.953976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.991116  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:37.991154  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:38.043796  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:38.043838  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:38.058636  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:38.058669  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:38.143022  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:38.143051  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:38.143067  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:40.721468  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:40.735679  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:40.735774  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:40.773583  451238 cri.go:89] found id: ""
	I0805 13:00:40.773609  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.773617  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:40.773626  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:40.773685  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:40.819857  451238 cri.go:89] found id: ""
	I0805 13:00:40.819886  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.819895  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:40.819901  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:40.819963  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:40.857156  451238 cri.go:89] found id: ""
	I0805 13:00:40.857184  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.857192  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:40.857198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:40.857251  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:40.892933  451238 cri.go:89] found id: ""
	I0805 13:00:40.892970  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.892981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:40.892990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:40.893046  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:40.927128  451238 cri.go:89] found id: ""
	I0805 13:00:40.927163  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.927173  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:40.927182  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:40.927237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:40.961790  451238 cri.go:89] found id: ""
	I0805 13:00:40.961817  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.961826  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:40.961832  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:40.961886  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:40.996249  451238 cri.go:89] found id: ""
	I0805 13:00:40.996282  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.996293  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:40.996300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:40.996371  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:41.032305  451238 cri.go:89] found id: ""
	I0805 13:00:41.032332  451238 logs.go:276] 0 containers: []
	W0805 13:00:41.032342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:41.032358  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:41.032375  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:41.075993  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:41.076027  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:41.126020  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:41.126057  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:41.140263  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:41.140288  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:41.216648  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:41.216670  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:41.216683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:38.524812  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.024597  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.909518  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:40.910256  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.410062  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.443930  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.945026  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.796367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:43.810086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:43.810162  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.844373  451238 cri.go:89] found id: ""
	I0805 13:00:43.844410  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.844422  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:43.844430  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:43.844502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:43.880249  451238 cri.go:89] found id: ""
	I0805 13:00:43.880285  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.880295  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:43.880303  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:43.880376  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:43.921279  451238 cri.go:89] found id: ""
	I0805 13:00:43.921313  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.921323  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:43.921329  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:43.921382  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:43.963736  451238 cri.go:89] found id: ""
	I0805 13:00:43.963782  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.963794  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:43.963803  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:43.963869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:44.009001  451238 cri.go:89] found id: ""
	I0805 13:00:44.009038  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.009050  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:44.009057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:44.009128  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:44.059484  451238 cri.go:89] found id: ""
	I0805 13:00:44.059514  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.059526  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:44.059534  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:44.059605  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:44.102043  451238 cri.go:89] found id: ""
	I0805 13:00:44.102075  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.102088  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:44.102094  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:44.102170  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:44.137518  451238 cri.go:89] found id: ""
	I0805 13:00:44.137558  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.137569  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:44.137584  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:44.137600  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:44.188139  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:44.188175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:44.202544  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:44.202588  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:44.278486  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:44.278508  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:44.278521  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:44.363419  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:44.363458  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:46.905665  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:46.922141  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:46.922206  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.025461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.523997  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.908437  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.409410  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.445919  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.944243  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.963468  451238 cri.go:89] found id: ""
	I0805 13:00:46.963494  451238 logs.go:276] 0 containers: []
	W0805 13:00:46.963502  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:46.963508  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:46.963557  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:47.003445  451238 cri.go:89] found id: ""
	I0805 13:00:47.003472  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.003480  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:47.003486  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:47.003537  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:47.043271  451238 cri.go:89] found id: ""
	I0805 13:00:47.043306  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.043318  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:47.043326  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:47.043394  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:47.079843  451238 cri.go:89] found id: ""
	I0805 13:00:47.079874  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.079884  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:47.079893  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:47.079954  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:47.116819  451238 cri.go:89] found id: ""
	I0805 13:00:47.116847  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.116856  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:47.116861  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:47.116917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:47.156302  451238 cri.go:89] found id: ""
	I0805 13:00:47.156331  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.156340  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:47.156353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:47.156410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:47.200419  451238 cri.go:89] found id: ""
	I0805 13:00:47.200449  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.200463  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:47.200469  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:47.200533  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:47.237483  451238 cri.go:89] found id: ""
	I0805 13:00:47.237515  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.237522  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:47.237532  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:47.237545  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:47.251598  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:47.251632  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:47.326457  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:47.326483  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:47.326501  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.410413  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:47.410455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:47.452696  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:47.452732  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.005335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:50.019610  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:50.019679  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:50.057401  451238 cri.go:89] found id: ""
	I0805 13:00:50.057435  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.057447  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:50.057456  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:50.057516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:50.101710  451238 cri.go:89] found id: ""
	I0805 13:00:50.101743  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.101751  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:50.101758  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:50.101822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:50.139624  451238 cri.go:89] found id: ""
	I0805 13:00:50.139658  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.139669  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:50.139677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:50.139761  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:50.176004  451238 cri.go:89] found id: ""
	I0805 13:00:50.176031  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.176039  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:50.176045  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:50.176123  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:50.219319  451238 cri.go:89] found id: ""
	I0805 13:00:50.219352  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.219362  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:50.219369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:50.219437  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:50.287443  451238 cri.go:89] found id: ""
	I0805 13:00:50.287478  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.287489  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:50.287498  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:50.287582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:50.321018  451238 cri.go:89] found id: ""
	I0805 13:00:50.321047  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.321056  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:50.321063  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:50.321124  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:50.354559  451238 cri.go:89] found id: ""
	I0805 13:00:50.354597  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.354610  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:50.354625  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:50.354642  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:50.398621  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:50.398657  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.451693  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:50.451735  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:50.466810  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:50.466851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:50.542431  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:50.542461  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:50.542482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.525977  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.025280  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.025760  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.410198  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.908466  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.946086  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.445962  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.128466  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:53.144139  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:53.144216  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:53.178383  451238 cri.go:89] found id: ""
	I0805 13:00:53.178427  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.178438  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:53.178447  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:53.178516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:53.220312  451238 cri.go:89] found id: ""
	I0805 13:00:53.220348  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.220358  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:53.220365  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:53.220432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:53.255352  451238 cri.go:89] found id: ""
	I0805 13:00:53.255380  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.255390  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:53.255398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:53.255473  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:53.293254  451238 cri.go:89] found id: ""
	I0805 13:00:53.293292  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.293311  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:53.293320  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:53.293395  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:53.329407  451238 cri.go:89] found id: ""
	I0805 13:00:53.329436  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.329448  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:53.329455  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:53.329523  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:53.362838  451238 cri.go:89] found id: ""
	I0805 13:00:53.362868  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.362876  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:53.362883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:53.362957  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:53.399283  451238 cri.go:89] found id: ""
	I0805 13:00:53.399313  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.399324  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:53.399332  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:53.399405  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:53.438527  451238 cri.go:89] found id: ""
	I0805 13:00:53.438558  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.438567  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:53.438578  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:53.438597  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:53.492709  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:53.492760  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:53.507522  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:53.507555  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:53.581690  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:53.581710  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:53.581724  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.664402  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:53.664451  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.209640  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:56.224403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:56.224487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:56.266214  451238 cri.go:89] found id: ""
	I0805 13:00:56.266243  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.266254  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:56.266263  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:56.266328  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:56.304034  451238 cri.go:89] found id: ""
	I0805 13:00:56.304070  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.304082  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:56.304091  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:56.304172  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:56.342133  451238 cri.go:89] found id: ""
	I0805 13:00:56.342159  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.342167  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:56.342173  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:56.342225  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:56.378549  451238 cri.go:89] found id: ""
	I0805 13:00:56.378588  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.378599  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:56.378606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:56.378667  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:56.415613  451238 cri.go:89] found id: ""
	I0805 13:00:56.415641  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.415651  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:56.415657  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:56.415715  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:56.451915  451238 cri.go:89] found id: ""
	I0805 13:00:56.451944  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.451953  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:56.451960  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:56.452021  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:56.492219  451238 cri.go:89] found id: ""
	I0805 13:00:56.492255  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.492267  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:56.492275  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:56.492347  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:56.534564  451238 cri.go:89] found id: ""
	I0805 13:00:56.534606  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.534618  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:56.534632  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:56.534652  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:56.548772  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:56.548813  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:56.625649  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:56.625678  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:56.625695  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:56.716735  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:56.716787  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.771881  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:56.771910  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:54.525355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.025659  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:54.908805  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:56.909601  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:55.943885  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.945233  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.325624  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:59.338796  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:59.338869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:59.375002  451238 cri.go:89] found id: ""
	I0805 13:00:59.375039  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.375050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:59.375059  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:59.375138  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:59.410778  451238 cri.go:89] found id: ""
	I0805 13:00:59.410800  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.410810  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:59.410817  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:59.410873  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:59.453728  451238 cri.go:89] found id: ""
	I0805 13:00:59.453760  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.453771  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:59.453779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:59.453845  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:59.492968  451238 cri.go:89] found id: ""
	I0805 13:00:59.493002  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.493013  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:59.493021  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:59.493091  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:59.533342  451238 cri.go:89] found id: ""
	I0805 13:00:59.533372  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.533383  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:59.533390  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:59.533445  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:59.569677  451238 cri.go:89] found id: ""
	I0805 13:00:59.569705  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.569715  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:59.569722  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:59.569789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:59.605106  451238 cri.go:89] found id: ""
	I0805 13:00:59.605139  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.605150  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:59.605158  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:59.605228  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:59.639948  451238 cri.go:89] found id: ""
	I0805 13:00:59.639980  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.639989  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:59.640000  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:59.640016  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:59.679926  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:59.679956  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.731545  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:59.731591  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:59.746286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:59.746320  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:59.828398  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:59.828420  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:59.828439  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:59.524365  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.525092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.410713  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.909619  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.945483  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.445780  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.412560  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:02.429633  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:02.429718  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:02.475916  451238 cri.go:89] found id: ""
	I0805 13:01:02.475951  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.475963  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:02.475971  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:02.476061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:02.528807  451238 cri.go:89] found id: ""
	I0805 13:01:02.528837  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.528849  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:02.528856  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:02.528924  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:02.575164  451238 cri.go:89] found id: ""
	I0805 13:01:02.575194  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.575210  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:02.575218  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:02.575286  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:02.614709  451238 cri.go:89] found id: ""
	I0805 13:01:02.614800  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.614815  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:02.614824  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:02.614902  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:02.654941  451238 cri.go:89] found id: ""
	I0805 13:01:02.654979  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.654990  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:02.654997  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:02.655069  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:02.690552  451238 cri.go:89] found id: ""
	I0805 13:01:02.690586  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.690595  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:02.690602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:02.690657  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:02.725607  451238 cri.go:89] found id: ""
	I0805 13:01:02.725644  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.725656  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:02.725665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:02.725745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:02.760180  451238 cri.go:89] found id: ""
	I0805 13:01:02.760211  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.760223  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:02.760244  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:02.760262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:02.813071  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:02.813128  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:02.828633  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:02.828665  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:02.898049  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:02.898074  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:02.898087  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.988077  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:02.988124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:05.532719  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:05.546423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:05.546489  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:05.590978  451238 cri.go:89] found id: ""
	I0805 13:01:05.591006  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.591013  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:05.591019  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:05.591071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:05.631251  451238 cri.go:89] found id: ""
	I0805 13:01:05.631287  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.631298  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:05.631306  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:05.631391  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:05.671826  451238 cri.go:89] found id: ""
	I0805 13:01:05.671863  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.671875  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:05.671883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:05.671951  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:05.708147  451238 cri.go:89] found id: ""
	I0805 13:01:05.708176  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.708186  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:05.708194  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:05.708262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:05.741962  451238 cri.go:89] found id: ""
	I0805 13:01:05.741994  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.742006  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:05.742015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:05.742087  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:05.777930  451238 cri.go:89] found id: ""
	I0805 13:01:05.777965  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.777976  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:05.777985  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:05.778061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:05.813066  451238 cri.go:89] found id: ""
	I0805 13:01:05.813099  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.813111  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:05.813119  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:05.813189  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:05.849382  451238 cri.go:89] found id: ""
	I0805 13:01:05.849410  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.849418  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:05.849428  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:05.849440  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:05.903376  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:05.903423  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:05.918540  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:05.918575  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:05.990608  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:05.990637  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:05.990658  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:06.072524  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:06.072571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:04.025528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.525325  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.409190  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.409231  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:07.445278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.617528  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:08.631637  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:08.631713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:08.669999  451238 cri.go:89] found id: ""
	I0805 13:01:08.670039  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.670050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:08.670065  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:08.670147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:08.705322  451238 cri.go:89] found id: ""
	I0805 13:01:08.705356  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.705365  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:08.705370  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:08.705442  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:08.744884  451238 cri.go:89] found id: ""
	I0805 13:01:08.744915  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.744927  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:08.744936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:08.745018  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:08.782394  451238 cri.go:89] found id: ""
	I0805 13:01:08.782428  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.782440  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:08.782448  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:08.782518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:08.816989  451238 cri.go:89] found id: ""
	I0805 13:01:08.817018  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.817027  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:08.817034  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:08.817106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:08.856389  451238 cri.go:89] found id: ""
	I0805 13:01:08.856420  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.856431  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:08.856439  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:08.856506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:08.891942  451238 cri.go:89] found id: ""
	I0805 13:01:08.891975  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.891986  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:08.891995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:08.892064  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:08.930329  451238 cri.go:89] found id: ""
	I0805 13:01:08.930364  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.930375  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:08.930389  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:08.930406  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.972574  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:08.972610  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:09.026194  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:09.026228  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:09.040973  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:09.041002  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:09.115094  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:09.115121  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:09.115143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:11.698322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:11.711841  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:11.711927  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:11.749152  451238 cri.go:89] found id: ""
	I0805 13:01:11.749187  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.749199  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:11.749207  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:11.749274  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:11.785395  451238 cri.go:89] found id: ""
	I0805 13:01:11.785430  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.785441  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:11.785449  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:11.785516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:11.822240  451238 cri.go:89] found id: ""
	I0805 13:01:11.822282  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.822293  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:11.822302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:11.822372  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:11.858755  451238 cri.go:89] found id: ""
	I0805 13:01:11.858794  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.858805  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:11.858814  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:11.858884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:11.893064  451238 cri.go:89] found id: ""
	I0805 13:01:11.893101  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.893113  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:11.893121  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:11.893195  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:11.930965  451238 cri.go:89] found id: ""
	I0805 13:01:11.931003  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.931015  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:11.931025  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:11.931089  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:09.025566  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.525069  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.910618  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.409157  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:09.944797  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:12.445029  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.967594  451238 cri.go:89] found id: ""
	I0805 13:01:11.967620  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.967630  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:11.967638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:11.967697  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:12.004978  451238 cri.go:89] found id: ""
	I0805 13:01:12.005007  451238 logs.go:276] 0 containers: []
	W0805 13:01:12.005015  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:12.005025  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:12.005037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:12.087476  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:12.087500  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:12.087515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:12.177690  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:12.177757  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:12.222858  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:12.222889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:12.273322  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:12.273362  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:14.788210  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:14.802351  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:14.802426  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:14.837705  451238 cri.go:89] found id: ""
	I0805 13:01:14.837736  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.837746  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:14.837755  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:14.837824  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:14.873389  451238 cri.go:89] found id: ""
	I0805 13:01:14.873420  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.873430  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:14.873438  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:14.873506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:14.913969  451238 cri.go:89] found id: ""
	I0805 13:01:14.913999  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.914009  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:14.914018  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:14.914081  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:14.953478  451238 cri.go:89] found id: ""
	I0805 13:01:14.953510  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.953521  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:14.953528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:14.953584  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:14.992166  451238 cri.go:89] found id: ""
	I0805 13:01:14.992197  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.992206  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:14.992212  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:14.992291  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:15.031258  451238 cri.go:89] found id: ""
	I0805 13:01:15.031285  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.031293  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:15.031300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:15.031353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:15.068944  451238 cri.go:89] found id: ""
	I0805 13:01:15.068972  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.068980  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:15.068986  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:15.069042  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:15.105413  451238 cri.go:89] found id: ""
	I0805 13:01:15.105443  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.105454  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:15.105467  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:15.105489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:15.161925  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:15.161969  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:15.177174  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:15.177206  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:15.257950  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:15.257975  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:15.257989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:15.336672  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:15.336716  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:13.526088  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:16.025513  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:13.908773  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:15.908817  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.910431  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:14.945842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.444869  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.876314  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:17.889842  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:17.889909  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:17.928050  451238 cri.go:89] found id: ""
	I0805 13:01:17.928077  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.928086  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:17.928092  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:17.928150  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:17.965713  451238 cri.go:89] found id: ""
	I0805 13:01:17.965751  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.965762  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:17.965770  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:17.965837  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:18.002938  451238 cri.go:89] found id: ""
	I0805 13:01:18.002972  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.002984  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:18.002992  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:18.003062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:18.040140  451238 cri.go:89] found id: ""
	I0805 13:01:18.040178  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.040190  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:18.040198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:18.040269  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:18.075427  451238 cri.go:89] found id: ""
	I0805 13:01:18.075463  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.075475  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:18.075490  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:18.075558  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:18.113469  451238 cri.go:89] found id: ""
	I0805 13:01:18.113507  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.113521  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:18.113528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:18.113587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:18.152626  451238 cri.go:89] found id: ""
	I0805 13:01:18.152662  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.152672  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:18.152678  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:18.152745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:18.189540  451238 cri.go:89] found id: ""
	I0805 13:01:18.189577  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.189590  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:18.189602  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:18.189618  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:18.244314  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:18.244353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:18.257912  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:18.257939  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:18.339659  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:18.339682  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:18.339699  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:18.425391  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:18.425449  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:20.975889  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:20.989798  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:20.989868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:21.030858  451238 cri.go:89] found id: ""
	I0805 13:01:21.030894  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.030906  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:21.030915  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:21.030979  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:21.067367  451238 cri.go:89] found id: ""
	I0805 13:01:21.067402  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.067411  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:21.067419  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:21.067476  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:21.104307  451238 cri.go:89] found id: ""
	I0805 13:01:21.104337  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.104352  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:21.104361  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:21.104424  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:21.141486  451238 cri.go:89] found id: ""
	I0805 13:01:21.141519  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.141531  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:21.141539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:21.141606  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:21.179247  451238 cri.go:89] found id: ""
	I0805 13:01:21.179305  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.179317  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:21.179330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:21.179406  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:21.215030  451238 cri.go:89] found id: ""
	I0805 13:01:21.215065  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.215075  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:21.215083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:21.215152  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:21.252982  451238 cri.go:89] found id: ""
	I0805 13:01:21.253008  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.253016  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:21.253022  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:21.253097  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:21.290256  451238 cri.go:89] found id: ""
	I0805 13:01:21.290292  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.290302  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:21.290325  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:21.290343  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:21.342809  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:21.342855  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:21.357959  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:21.358000  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:21.433087  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:21.433120  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:21.433143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:21.514261  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:21.514312  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:18.025965  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.524832  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.409943  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:22.909233  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:19.445074  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:21.445547  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:23.445637  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
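	The interleaved pod_ready lines come from three other concurrent runs (PIDs 450884, 450576 and 450393), each polling a metrics-server pod that never reports Ready. A minimal sketch of inspecting such a pod by hand, assuming kubectl access to the affected cluster (the context name is a placeholder, not taken from this log):

	    # Sketch only -- <profile> stands for the minikube context of the affected run.
	    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-dsrqr -o wide
	    kubectl --context <profile> -n kube-system describe pod metrics-server-569cc877fc-dsrqr
	    kubectl --context <profile> -n kube-system logs deploy/metrics-server --tail=50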
	I0805 13:01:24.060402  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:24.076056  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:24.076131  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:24.115976  451238 cri.go:89] found id: ""
	I0805 13:01:24.116009  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.116022  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:24.116031  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:24.116111  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:24.158411  451238 cri.go:89] found id: ""
	I0805 13:01:24.158440  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.158448  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:24.158454  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:24.158520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:24.194589  451238 cri.go:89] found id: ""
	I0805 13:01:24.194624  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.194635  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:24.194644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:24.194720  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:24.231528  451238 cri.go:89] found id: ""
	I0805 13:01:24.231562  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.231569  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:24.231576  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:24.231649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:24.268491  451238 cri.go:89] found id: ""
	I0805 13:01:24.268523  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.268532  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:24.268538  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:24.268602  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:24.306718  451238 cri.go:89] found id: ""
	I0805 13:01:24.306752  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.306763  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:24.306772  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:24.306839  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:24.343552  451238 cri.go:89] found id: ""
	I0805 13:01:24.343578  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.343586  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:24.343593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:24.343649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:24.384555  451238 cri.go:89] found id: ""
	I0805 13:01:24.384590  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.384602  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:24.384615  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:24.384633  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.430256  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:24.430298  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:24.484616  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:24.484661  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:24.500926  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:24.500958  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:24.581379  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:24.581410  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:24.581424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:22.525806  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.526411  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.024452  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.408887  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.409717  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.945113  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:28.444740  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.167538  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:27.181959  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:27.182035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:27.223243  451238 cri.go:89] found id: ""
	I0805 13:01:27.223282  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.223293  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:27.223301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:27.223374  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:27.257806  451238 cri.go:89] found id: ""
	I0805 13:01:27.257843  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.257856  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:27.257864  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:27.257940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:27.304306  451238 cri.go:89] found id: ""
	I0805 13:01:27.304342  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.304353  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:27.304370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:27.304439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:27.342595  451238 cri.go:89] found id: ""
	I0805 13:01:27.342623  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.342631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:27.342638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:27.342707  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:27.385628  451238 cri.go:89] found id: ""
	I0805 13:01:27.385661  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.385670  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:27.385677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:27.385760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:27.425059  451238 cri.go:89] found id: ""
	I0805 13:01:27.425091  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.425100  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:27.425106  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:27.425175  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:27.465739  451238 cri.go:89] found id: ""
	I0805 13:01:27.465783  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.465794  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:27.465807  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:27.465869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:27.506431  451238 cri.go:89] found id: ""
	I0805 13:01:27.506460  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.506468  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:27.506477  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:27.506494  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:27.586440  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:27.586467  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:27.586482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.667826  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:27.667869  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:27.710458  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:27.710496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:27.763057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:27.763100  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
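	Each "describe nodes" attempt above fails with "connection to the server localhost:8443 was refused": the node's kubeconfig points at an apiserver on localhost:8443 and nothing is listening there, which is consistent with crictl finding no kube-apiserver container. A minimal sketch of confirming that directly on the node, reusing the paths shown in the log (the curl probe is an added assumption, not something the harness runs):

	    # Sketch only -- run on the node; paths match the log above.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig   # fails the same way as "describe nodes"
	    curl -ksm 5 https://localhost:8443/healthz || echo "apiserver not listening on 8443"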
	I0805 13:01:30.278799  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:30.293788  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:30.293874  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:30.336209  451238 cri.go:89] found id: ""
	I0805 13:01:30.336240  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.336248  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:30.336255  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:30.336323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:30.371593  451238 cri.go:89] found id: ""
	I0805 13:01:30.371627  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.371642  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:30.371649  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:30.371714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:30.408266  451238 cri.go:89] found id: ""
	I0805 13:01:30.408298  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.408317  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:30.408325  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:30.408388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:30.448841  451238 cri.go:89] found id: ""
	I0805 13:01:30.448864  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.448872  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:30.448878  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:30.448940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:30.488367  451238 cri.go:89] found id: ""
	I0805 13:01:30.488403  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.488411  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:30.488418  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:30.488485  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:30.527131  451238 cri.go:89] found id: ""
	I0805 13:01:30.527163  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.527173  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:30.527181  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:30.527249  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:30.568089  451238 cri.go:89] found id: ""
	I0805 13:01:30.568122  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.568131  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:30.568138  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:30.568203  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:30.605952  451238 cri.go:89] found id: ""
	I0805 13:01:30.605990  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.606007  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:30.606021  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:30.606041  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:30.656449  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:30.656491  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:30.710124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:30.710164  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.724417  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:30.724455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:30.820639  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:30.820669  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:30.820687  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:29.025377  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:31.525340  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:29.909043  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.410359  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:30.445047  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.445931  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:33.403497  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:33.419581  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:33.419651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:33.462011  451238 cri.go:89] found id: ""
	I0805 13:01:33.462042  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.462051  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:33.462057  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:33.462126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:33.502476  451238 cri.go:89] found id: ""
	I0805 13:01:33.502509  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.502519  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:33.502527  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:33.502601  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:33.547392  451238 cri.go:89] found id: ""
	I0805 13:01:33.547421  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.547430  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:33.547437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:33.547490  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:33.584013  451238 cri.go:89] found id: ""
	I0805 13:01:33.584040  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.584048  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:33.584054  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:33.584125  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:33.617325  451238 cri.go:89] found id: ""
	I0805 13:01:33.617359  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.617367  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:33.617374  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:33.617429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:33.651922  451238 cri.go:89] found id: ""
	I0805 13:01:33.651959  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.651971  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:33.651980  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:33.652049  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:33.689487  451238 cri.go:89] found id: ""
	I0805 13:01:33.689515  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.689522  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:33.689529  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:33.689580  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:33.723220  451238 cri.go:89] found id: ""
	I0805 13:01:33.723251  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.723260  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:33.723270  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:33.723282  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:33.777271  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:33.777311  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:33.792497  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:33.792532  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:33.866801  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:33.866826  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:33.866842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.946739  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:33.946774  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
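	Between the container queries the harness also pulls node-level logs: the kubelet and CRI-O journals, a filtered dmesg, and a container status listing. A minimal sketch of collecting the same set by hand on the node, using the commands recorded above:

	    # Sketch only -- same collection commands as in the log.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a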
	I0805 13:01:36.486108  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:36.501316  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:36.501397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:36.542082  451238 cri.go:89] found id: ""
	I0805 13:01:36.542118  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.542130  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:36.542139  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:36.542217  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:36.581005  451238 cri.go:89] found id: ""
	I0805 13:01:36.581047  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.581059  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:36.581068  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:36.581148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:36.623945  451238 cri.go:89] found id: ""
	I0805 13:01:36.623974  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.623982  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:36.623987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:36.624041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:36.661632  451238 cri.go:89] found id: ""
	I0805 13:01:36.661665  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.661673  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:36.661680  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:36.661738  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:36.701808  451238 cri.go:89] found id: ""
	I0805 13:01:36.701839  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.701850  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:36.701857  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:36.701941  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:36.742287  451238 cri.go:89] found id: ""
	I0805 13:01:36.742320  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.742331  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:36.742340  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:36.742410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:36.794581  451238 cri.go:89] found id: ""
	I0805 13:01:36.794610  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.794621  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:36.794629  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:36.794690  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:36.833271  451238 cri.go:89] found id: ""
	I0805 13:01:36.833301  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.833311  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:36.833325  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:36.833346  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:36.921427  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:36.921467  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:34.024353  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.025557  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.909401  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.909529  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.945077  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.945632  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.965468  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:36.965503  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:37.018475  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:37.018515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:37.033671  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:37.033697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:37.105339  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.606042  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:39.619215  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:39.619296  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:39.655614  451238 cri.go:89] found id: ""
	I0805 13:01:39.655648  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.655660  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:39.655668  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:39.655760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:39.691489  451238 cri.go:89] found id: ""
	I0805 13:01:39.691523  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.691535  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:39.691543  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:39.691610  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:39.726394  451238 cri.go:89] found id: ""
	I0805 13:01:39.726427  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.726438  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:39.726446  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:39.726518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:39.759847  451238 cri.go:89] found id: ""
	I0805 13:01:39.759897  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.759909  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:39.759918  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:39.759988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:39.795011  451238 cri.go:89] found id: ""
	I0805 13:01:39.795043  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.795051  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:39.795057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:39.795120  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:39.831302  451238 cri.go:89] found id: ""
	I0805 13:01:39.831336  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.831346  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:39.831356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:39.831432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:39.866506  451238 cri.go:89] found id: ""
	I0805 13:01:39.866540  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.866547  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:39.866554  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:39.866622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:39.898083  451238 cri.go:89] found id: ""
	I0805 13:01:39.898108  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.898115  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:39.898128  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:39.898147  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:39.912192  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:39.912221  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:39.989216  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.989246  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:39.989262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:40.069702  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:40.069746  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:40.118390  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:40.118428  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:38.525929  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:40.527120  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:38.909905  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.408953  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.409966  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:39.445474  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.944704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.944956  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:42.669421  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:42.682287  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:42.682359  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:42.722933  451238 cri.go:89] found id: ""
	I0805 13:01:42.722961  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.722969  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:42.722975  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:42.723037  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:42.757604  451238 cri.go:89] found id: ""
	I0805 13:01:42.757635  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.757646  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:42.757654  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:42.757723  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:42.795825  451238 cri.go:89] found id: ""
	I0805 13:01:42.795852  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.795863  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:42.795871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:42.795939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:42.831749  451238 cri.go:89] found id: ""
	I0805 13:01:42.831779  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.831791  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:42.831800  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:42.831862  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:42.866280  451238 cri.go:89] found id: ""
	I0805 13:01:42.866310  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.866322  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:42.866330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:42.866390  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:42.904393  451238 cri.go:89] found id: ""
	I0805 13:01:42.904427  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.904436  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:42.904445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:42.904510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:42.943175  451238 cri.go:89] found id: ""
	I0805 13:01:42.943204  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.943215  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:42.943223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:42.943292  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:42.979117  451238 cri.go:89] found id: ""
	I0805 13:01:42.979144  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.979152  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:42.979174  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:42.979191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:43.032032  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:43.032070  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:43.046285  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:43.046315  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:43.120300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:43.120327  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:43.120347  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:43.209800  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:43.209851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:45.759057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:45.771984  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:45.772056  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:45.805421  451238 cri.go:89] found id: ""
	I0805 13:01:45.805451  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.805459  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:45.805466  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:45.805521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:45.841552  451238 cri.go:89] found id: ""
	I0805 13:01:45.841579  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.841588  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:45.841597  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:45.841672  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:45.878502  451238 cri.go:89] found id: ""
	I0805 13:01:45.878529  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.878537  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:45.878546  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:45.878622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:45.921145  451238 cri.go:89] found id: ""
	I0805 13:01:45.921187  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.921198  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:45.921207  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:45.921273  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:45.958408  451238 cri.go:89] found id: ""
	I0805 13:01:45.958437  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.958445  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:45.958452  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:45.958521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:45.994632  451238 cri.go:89] found id: ""
	I0805 13:01:45.994660  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.994669  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:45.994676  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:45.994727  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:46.032930  451238 cri.go:89] found id: ""
	I0805 13:01:46.032961  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.032971  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:46.032978  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:46.033041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:46.074396  451238 cri.go:89] found id: ""
	I0805 13:01:46.074429  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.074441  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:46.074454  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:46.074475  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:46.131977  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:46.132020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:46.147924  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:46.147957  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:46.222005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:46.222038  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:46.222054  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:46.306799  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:46.306842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:43.024643  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.524936  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.410385  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:47.909281  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:46.444746  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.950198  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.856982  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:48.870945  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:48.871025  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:48.930811  451238 cri.go:89] found id: ""
	I0805 13:01:48.930837  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.930852  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:48.930858  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:48.930917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:48.986604  451238 cri.go:89] found id: ""
	I0805 13:01:48.986629  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.986637  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:48.986643  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:48.986706  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:49.039433  451238 cri.go:89] found id: ""
	I0805 13:01:49.039468  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.039479  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:49.039487  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:49.039555  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:49.079593  451238 cri.go:89] found id: ""
	I0805 13:01:49.079625  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.079637  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:49.079645  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:49.079714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:49.116243  451238 cri.go:89] found id: ""
	I0805 13:01:49.116274  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.116284  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:49.116292  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:49.116360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:49.158744  451238 cri.go:89] found id: ""
	I0805 13:01:49.158779  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.158790  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:49.158799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:49.158868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:49.193747  451238 cri.go:89] found id: ""
	I0805 13:01:49.193778  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.193786  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:49.193792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:49.193843  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:49.227663  451238 cri.go:89] found id: ""
	I0805 13:01:49.227691  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.227704  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:49.227714  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:49.227727  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:49.281380  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:49.281424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:49.296286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:49.296318  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:49.368584  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:49.368609  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:49.368625  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:49.453857  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:49.453909  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:48.024987  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.026076  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.408363  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:52.410039  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.444602  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:53.445118  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.993057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:52.006066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:52.006148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:52.043179  451238 cri.go:89] found id: ""
	I0805 13:01:52.043212  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.043223  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:52.043231  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:52.043300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:52.076469  451238 cri.go:89] found id: ""
	I0805 13:01:52.076502  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.076512  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:52.076520  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:52.076586  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:52.112443  451238 cri.go:89] found id: ""
	I0805 13:01:52.112477  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.112488  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:52.112497  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:52.112569  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:52.147589  451238 cri.go:89] found id: ""
	I0805 13:01:52.147620  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.147631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:52.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:52.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:52.184016  451238 cri.go:89] found id: ""
	I0805 13:01:52.184053  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.184063  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:52.184072  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:52.184134  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:52.219670  451238 cri.go:89] found id: ""
	I0805 13:01:52.219702  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.219714  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:52.219727  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:52.219820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:52.258697  451238 cri.go:89] found id: ""
	I0805 13:01:52.258731  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.258744  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:52.258752  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:52.258818  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:52.299599  451238 cri.go:89] found id: ""
	I0805 13:01:52.299636  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.299649  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:52.299665  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:52.299683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:52.351730  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:52.351772  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:52.365993  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:52.366022  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:52.436019  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:52.436041  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:52.436056  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:52.520082  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:52.520118  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:55.064214  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:55.077358  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:55.077454  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:55.110523  451238 cri.go:89] found id: ""
	I0805 13:01:55.110555  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.110564  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:55.110570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:55.110630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:55.147870  451238 cri.go:89] found id: ""
	I0805 13:01:55.147905  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.147916  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:55.147925  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:55.147998  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:55.180769  451238 cri.go:89] found id: ""
	I0805 13:01:55.180803  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.180814  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:55.180822  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:55.180890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:55.217290  451238 cri.go:89] found id: ""
	I0805 13:01:55.217332  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.217343  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:55.217353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:55.217420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:55.254185  451238 cri.go:89] found id: ""
	I0805 13:01:55.254221  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.254232  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:55.254239  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:55.254295  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:55.290633  451238 cri.go:89] found id: ""
	I0805 13:01:55.290662  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.290673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:55.290681  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:55.290747  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:55.325830  451238 cri.go:89] found id: ""
	I0805 13:01:55.325862  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.325873  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:55.325880  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:55.325947  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:55.359887  451238 cri.go:89] found id: ""
	I0805 13:01:55.359922  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.359931  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:55.359941  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:55.359953  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:55.418251  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:55.418299  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:55.432007  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:55.432038  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:55.507177  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:55.507205  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:55.507219  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:55.586919  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:55.586965  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:52.525480  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.525653  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.024834  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.410408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:56.909810  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:55.944741  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.946654  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:58.128822  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:58.142726  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:58.142799  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:58.178027  451238 cri.go:89] found id: ""
	I0805 13:01:58.178056  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.178067  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:58.178075  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:58.178147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:58.213309  451238 cri.go:89] found id: ""
	I0805 13:01:58.213340  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.213351  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:58.213358  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:58.213430  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:58.247296  451238 cri.go:89] found id: ""
	I0805 13:01:58.247323  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.247332  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:58.247338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:58.247393  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:58.280226  451238 cri.go:89] found id: ""
	I0805 13:01:58.280255  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.280266  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:58.280277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:58.280335  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:58.316934  451238 cri.go:89] found id: ""
	I0805 13:01:58.316969  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.316981  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:58.316989  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:58.317055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:58.360931  451238 cri.go:89] found id: ""
	I0805 13:01:58.360967  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.360979  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:58.360987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:58.361055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:58.399112  451238 cri.go:89] found id: ""
	I0805 13:01:58.399150  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.399163  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:58.399171  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:58.399244  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:58.441903  451238 cri.go:89] found id: ""
	I0805 13:01:58.441930  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.441941  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:58.441952  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:58.441967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:58.524869  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:58.524908  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.562598  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:58.562634  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:58.618274  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:58.618313  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:58.633011  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:58.633039  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:58.706287  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.206971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:01.222277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:01.222357  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:01.266949  451238 cri.go:89] found id: ""
	I0805 13:02:01.266982  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.266993  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:01.267007  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:01.267108  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:01.306765  451238 cri.go:89] found id: ""
	I0805 13:02:01.306791  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.306799  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:01.306805  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:01.306859  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:01.345108  451238 cri.go:89] found id: ""
	I0805 13:02:01.345145  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.345157  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:01.345164  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:01.345227  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:01.383201  451238 cri.go:89] found id: ""
	I0805 13:02:01.383231  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.383239  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:01.383245  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:01.383307  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:01.419292  451238 cri.go:89] found id: ""
	I0805 13:02:01.419320  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.419331  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:01.419338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:01.419410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:01.456447  451238 cri.go:89] found id: ""
	I0805 13:02:01.456482  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.456492  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:01.456500  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:01.456568  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:01.496266  451238 cri.go:89] found id: ""
	I0805 13:02:01.496298  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.496306  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:01.496312  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:01.496375  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:01.541492  451238 cri.go:89] found id: ""
	I0805 13:02:01.541529  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.541541  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:01.541555  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:01.541571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:01.593140  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:01.593185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:01.606641  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:01.606670  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:01.681989  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.682015  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:01.682030  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:01.765612  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:01.765655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:59.025355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.025443  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:59.408591  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.409368  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:00.445254  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:02.944495  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:04.311066  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:04.326530  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:04.326599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:04.360091  451238 cri.go:89] found id: ""
	I0805 13:02:04.360124  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.360136  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:04.360142  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:04.360214  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:04.398983  451238 cri.go:89] found id: ""
	I0805 13:02:04.399014  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.399026  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:04.399045  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:04.399122  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:04.433444  451238 cri.go:89] found id: ""
	I0805 13:02:04.433474  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.433483  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:04.433495  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:04.433546  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:04.470113  451238 cri.go:89] found id: ""
	I0805 13:02:04.470145  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.470156  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:04.470167  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:04.470233  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:04.505695  451238 cri.go:89] found id: ""
	I0805 13:02:04.505721  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.505731  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:04.505738  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:04.505801  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:04.544093  451238 cri.go:89] found id: ""
	I0805 13:02:04.544121  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.544129  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:04.544136  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:04.544196  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:04.579663  451238 cri.go:89] found id: ""
	I0805 13:02:04.579702  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.579715  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:04.579724  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:04.579803  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:04.616524  451238 cri.go:89] found id: ""
	I0805 13:02:04.616565  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.616577  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:04.616590  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:04.616607  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:04.693014  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:04.693035  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:04.693048  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:04.772508  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:04.772550  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.813014  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:04.813043  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:04.864653  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:04.864702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:03.525225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:06.024868  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:03.908365  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.908993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.910958  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.444593  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.444737  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.378816  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:07.392347  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:07.392439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:07.425843  451238 cri.go:89] found id: ""
	I0805 13:02:07.425876  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.425887  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:07.425895  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:07.425958  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:07.461547  451238 cri.go:89] found id: ""
	I0805 13:02:07.461575  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.461584  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:07.461591  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:07.461651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:07.496461  451238 cri.go:89] found id: ""
	I0805 13:02:07.496500  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.496510  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:07.496521  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:07.496599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:07.531520  451238 cri.go:89] found id: ""
	I0805 13:02:07.531556  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.531566  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:07.531574  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:07.531642  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:07.571821  451238 cri.go:89] found id: ""
	I0805 13:02:07.571855  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.571866  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:07.571876  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:07.571948  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:07.611111  451238 cri.go:89] found id: ""
	I0805 13:02:07.611151  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.611159  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:07.611165  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:07.611226  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:07.651428  451238 cri.go:89] found id: ""
	I0805 13:02:07.651456  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.651464  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:07.651470  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:07.651520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:07.689828  451238 cri.go:89] found id: ""
	I0805 13:02:07.689858  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.689866  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:07.689877  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:07.689893  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:07.746381  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:07.746422  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.760953  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:07.760989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:07.834859  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:07.834883  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:07.834901  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:07.915344  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:07.915376  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:10.459232  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:10.472789  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:10.472853  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:10.508434  451238 cri.go:89] found id: ""
	I0805 13:02:10.508462  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.508470  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:10.508477  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:10.508539  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:10.543487  451238 cri.go:89] found id: ""
	I0805 13:02:10.543515  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.543524  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:10.543530  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:10.543582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:10.588274  451238 cri.go:89] found id: ""
	I0805 13:02:10.588302  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.588310  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:10.588317  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:10.588379  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:10.620810  451238 cri.go:89] found id: ""
	I0805 13:02:10.620851  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.620863  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:10.620871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:10.620945  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:10.657882  451238 cri.go:89] found id: ""
	I0805 13:02:10.657913  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.657923  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:10.657929  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:10.657993  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:10.696188  451238 cri.go:89] found id: ""
	I0805 13:02:10.696220  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.696229  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:10.696235  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:10.696294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:10.729942  451238 cri.go:89] found id: ""
	I0805 13:02:10.729977  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.729988  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:10.729996  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:10.730050  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:10.761972  451238 cri.go:89] found id: ""
	I0805 13:02:10.762000  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.762008  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:10.762018  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:10.762032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:10.816859  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:10.816890  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:10.830348  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:10.830379  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:10.902720  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:10.902753  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:10.902771  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:10.981464  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:10.981505  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:08.024948  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.525441  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.408841  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:12.409506  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:09.445359  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:11.944853  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:13.528296  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:13.541813  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:13.541887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:13.575632  451238 cri.go:89] found id: ""
	I0805 13:02:13.575669  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.575681  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:13.575689  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:13.575766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:13.612646  451238 cri.go:89] found id: ""
	I0805 13:02:13.612680  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.612691  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:13.612699  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:13.612755  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:13.650310  451238 cri.go:89] found id: ""
	I0805 13:02:13.650341  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.650361  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:13.650369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:13.650439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:13.686941  451238 cri.go:89] found id: ""
	I0805 13:02:13.686970  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.686981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:13.686990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:13.687054  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:13.722250  451238 cri.go:89] found id: ""
	I0805 13:02:13.722285  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.722297  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:13.722306  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:13.722388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:13.758337  451238 cri.go:89] found id: ""
	I0805 13:02:13.758367  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.758375  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:13.758382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:13.758443  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:13.792980  451238 cri.go:89] found id: ""
	I0805 13:02:13.793016  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.793028  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:13.793036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:13.793127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:13.831511  451238 cri.go:89] found id: ""
	I0805 13:02:13.831539  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.831547  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:13.831558  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:13.831579  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.885124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:13.885169  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:13.899112  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:13.899155  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:13.977058  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:13.977099  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:13.977115  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:14.060873  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:14.060911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:16.602595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:16.617557  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:16.617638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:16.660212  451238 cri.go:89] found id: ""
	I0805 13:02:16.660244  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.660256  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:16.660264  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:16.660323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:16.695515  451238 cri.go:89] found id: ""
	I0805 13:02:16.695553  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.695564  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:16.695572  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:16.695638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:16.732844  451238 cri.go:89] found id: ""
	I0805 13:02:16.732875  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.732884  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:16.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:16.732943  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:16.772465  451238 cri.go:89] found id: ""
	I0805 13:02:16.772497  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.772504  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:16.772517  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:16.772582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:16.809826  451238 cri.go:89] found id: ""
	I0805 13:02:16.809863  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.809875  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:16.809882  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:16.809949  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:16.849480  451238 cri.go:89] found id: ""
	I0805 13:02:16.849512  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.849523  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:16.849531  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:16.849598  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:16.884098  451238 cri.go:89] found id: ""
	I0805 13:02:16.884132  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.884144  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:16.884152  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:16.884222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:16.920497  451238 cri.go:89] found id: ""
	I0805 13:02:16.920523  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.920530  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:16.920541  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:16.920556  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.025299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:15.525474  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.908633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.909254  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.445321  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.945044  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:18.945630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.975287  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:16.975317  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:16.989524  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:16.989552  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:17.057997  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:17.058022  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:17.058037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:17.133721  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:17.133763  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:19.672385  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:19.687948  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:19.688017  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:19.724105  451238 cri.go:89] found id: ""
	I0805 13:02:19.724132  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.724140  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:19.724147  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:19.724199  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:19.758263  451238 cri.go:89] found id: ""
	I0805 13:02:19.758296  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.758306  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:19.758314  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:19.758381  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:19.792924  451238 cri.go:89] found id: ""
	I0805 13:02:19.792954  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.792961  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:19.792967  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:19.793023  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:19.826340  451238 cri.go:89] found id: ""
	I0805 13:02:19.826367  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.826375  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:19.826382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:19.826434  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:19.864289  451238 cri.go:89] found id: ""
	I0805 13:02:19.864323  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.864334  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:19.864343  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:19.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:19.899630  451238 cri.go:89] found id: ""
	I0805 13:02:19.899661  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.899673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:19.899682  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:19.899786  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:19.935798  451238 cri.go:89] found id: ""
	I0805 13:02:19.935826  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.935836  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:19.935843  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:19.935896  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:19.977984  451238 cri.go:89] found id: ""
	I0805 13:02:19.978019  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.978031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:19.978044  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:19.978062  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:20.030096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:20.030131  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:20.043878  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:20.043940  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:20.119251  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:20.119279  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:20.119297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:20.202445  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:20.202488  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:18.026282  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:20.524225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:19.408760  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.410108  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.445045  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.944150  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:22.744728  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:22.758606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:22.758675  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:22.791663  451238 cri.go:89] found id: ""
	I0805 13:02:22.791696  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.791708  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:22.791717  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:22.791821  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:22.826568  451238 cri.go:89] found id: ""
	I0805 13:02:22.826594  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.826603  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:22.826609  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:22.826671  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:22.860430  451238 cri.go:89] found id: ""
	I0805 13:02:22.860459  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.860470  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:22.860479  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:22.860543  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:22.893815  451238 cri.go:89] found id: ""
	I0805 13:02:22.893846  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.893854  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:22.893860  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:22.893929  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:22.929804  451238 cri.go:89] found id: ""
	I0805 13:02:22.929830  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.929840  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:22.929849  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:22.929915  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:22.964918  451238 cri.go:89] found id: ""
	I0805 13:02:22.964950  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.964961  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:22.964969  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:22.965035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:23.000236  451238 cri.go:89] found id: ""
	I0805 13:02:23.000271  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.000282  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:23.000290  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:23.000354  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:23.052075  451238 cri.go:89] found id: ""
	I0805 13:02:23.052108  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.052117  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:23.052128  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:23.052141  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:23.104213  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:23.104248  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:23.118811  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:23.118851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:23.188552  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:23.188578  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:23.188595  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:23.272518  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:23.272562  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:25.811116  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:25.825030  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:25.825113  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:25.864282  451238 cri.go:89] found id: ""
	I0805 13:02:25.864318  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.864331  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:25.864339  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:25.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:25.901712  451238 cri.go:89] found id: ""
	I0805 13:02:25.901746  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.901754  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:25.901760  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:25.901822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:25.937036  451238 cri.go:89] found id: ""
	I0805 13:02:25.937068  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.937077  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:25.937083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:25.937146  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:25.974598  451238 cri.go:89] found id: ""
	I0805 13:02:25.974627  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.974638  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:25.974646  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:25.974713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:26.011083  451238 cri.go:89] found id: ""
	I0805 13:02:26.011116  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.011124  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:26.011130  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:26.011190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:26.050187  451238 cri.go:89] found id: ""
	I0805 13:02:26.050219  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.050231  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:26.050242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:26.050317  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:26.085038  451238 cri.go:89] found id: ""
	I0805 13:02:26.085067  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.085077  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:26.085086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:26.085151  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:26.122121  451238 cri.go:89] found id: ""
	I0805 13:02:26.122150  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.122158  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:26.122173  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:26.122191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:26.193819  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:26.193850  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:26.193865  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:26.273453  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:26.273492  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:26.312474  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:26.312509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:26.363176  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:26.363215  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:22.524303  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:24.525047  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.528347  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.909120  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.409913  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:25.944824  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.444803  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.878523  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:28.892242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:28.892330  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:28.928650  451238 cri.go:89] found id: ""
	I0805 13:02:28.928682  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.928693  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:28.928702  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:28.928772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:28.965582  451238 cri.go:89] found id: ""
	I0805 13:02:28.965615  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.965626  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:28.965634  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:28.965698  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:29.001824  451238 cri.go:89] found id: ""
	I0805 13:02:29.001855  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.001865  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:29.001874  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:29.001939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:29.037688  451238 cri.go:89] found id: ""
	I0805 13:02:29.037715  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.037722  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:29.037730  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:29.037780  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:29.078495  451238 cri.go:89] found id: ""
	I0805 13:02:29.078540  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.078552  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:29.078559  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:29.078627  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:29.113728  451238 cri.go:89] found id: ""
	I0805 13:02:29.113764  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.113776  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:29.113786  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:29.113851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:29.147590  451238 cri.go:89] found id: ""
	I0805 13:02:29.147618  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.147629  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:29.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:29.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:29.186015  451238 cri.go:89] found id: ""
	I0805 13:02:29.186043  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.186052  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:29.186062  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:29.186074  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:29.242795  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:29.242850  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:29.257012  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:29.257046  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:29.330528  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:29.330555  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:29.330569  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:29.418109  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:29.418145  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:29.025256  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.526187  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.909283  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.409736  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:30.944380  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:32.945421  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.986351  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:32.001265  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:32.001349  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:32.035152  451238 cri.go:89] found id: ""
	I0805 13:02:32.035191  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.035200  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:32.035208  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:32.035262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:32.069086  451238 cri.go:89] found id: ""
	I0805 13:02:32.069118  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.069128  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:32.069136  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:32.069204  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:32.103788  451238 cri.go:89] found id: ""
	I0805 13:02:32.103814  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.103822  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:32.103831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:32.103893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:32.139104  451238 cri.go:89] found id: ""
	I0805 13:02:32.139138  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.139149  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:32.139157  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:32.139222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:32.192759  451238 cri.go:89] found id: ""
	I0805 13:02:32.192789  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.192798  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:32.192804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:32.192865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:32.231080  451238 cri.go:89] found id: ""
	I0805 13:02:32.231115  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.231126  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:32.231135  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:32.231200  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:32.266547  451238 cri.go:89] found id: ""
	I0805 13:02:32.266578  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.266587  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:32.266594  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:32.266647  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:32.301828  451238 cri.go:89] found id: ""
	I0805 13:02:32.301856  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.301865  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:32.301875  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:32.301888  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:32.358439  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:32.358479  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:32.372349  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:32.372383  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:32.442335  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:32.442369  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:32.442388  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:32.521705  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:32.521744  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:35.060867  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:35.074370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:35.074433  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:35.111149  451238 cri.go:89] found id: ""
	I0805 13:02:35.111181  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.111191  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:35.111200  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:35.111268  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:35.153781  451238 cri.go:89] found id: ""
	I0805 13:02:35.153814  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.153825  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:35.153832  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:35.153894  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:35.193207  451238 cri.go:89] found id: ""
	I0805 13:02:35.193239  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.193256  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:35.193291  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:35.193370  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:35.243879  451238 cri.go:89] found id: ""
	I0805 13:02:35.243915  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.243928  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:35.243936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:35.243994  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:35.297922  451238 cri.go:89] found id: ""
	I0805 13:02:35.297954  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.297966  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:35.297973  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:35.298039  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:35.333201  451238 cri.go:89] found id: ""
	I0805 13:02:35.333234  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.333245  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:35.333254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:35.333316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:35.366327  451238 cri.go:89] found id: ""
	I0805 13:02:35.366361  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.366373  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:35.366381  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:35.366449  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:35.401515  451238 cri.go:89] found id: ""
	I0805 13:02:35.401546  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.401555  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:35.401565  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:35.401578  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:35.451057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:35.451090  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:35.465054  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:35.465095  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:35.547111  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:35.547142  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:35.547160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:35.627451  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:35.627490  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:34.025104  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:36.524904  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:33.908489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.909183  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.909360  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.445317  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.446056  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:38.169022  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:38.181892  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:38.181968  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:38.217919  451238 cri.go:89] found id: ""
	I0805 13:02:38.217951  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.217961  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:38.217970  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:38.218041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:38.253967  451238 cri.go:89] found id: ""
	I0805 13:02:38.253999  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.254008  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:38.254020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:38.254073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:38.293757  451238 cri.go:89] found id: ""
	I0805 13:02:38.293789  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.293801  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:38.293809  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:38.293904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:38.329657  451238 cri.go:89] found id: ""
	I0805 13:02:38.329686  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.329697  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:38.329705  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:38.329772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:38.364602  451238 cri.go:89] found id: ""
	I0805 13:02:38.364635  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.364647  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:38.364656  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:38.364732  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:38.396352  451238 cri.go:89] found id: ""
	I0805 13:02:38.396382  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.396394  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:38.396403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:38.396471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:38.429172  451238 cri.go:89] found id: ""
	I0805 13:02:38.429203  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.429214  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:38.429223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:38.429293  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:38.464855  451238 cri.go:89] found id: ""
	I0805 13:02:38.464891  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.464903  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:38.464916  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:38.464931  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:38.514924  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:38.514967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:38.530076  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:38.530113  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:38.602472  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:38.602494  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:38.602509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:38.683905  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:38.683948  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:41.226878  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:41.245027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:41.245100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:41.280482  451238 cri.go:89] found id: ""
	I0805 13:02:41.280511  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.280523  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:41.280532  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:41.280597  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:41.316592  451238 cri.go:89] found id: ""
	I0805 13:02:41.316622  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.316633  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:41.316641  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:41.316708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:41.353282  451238 cri.go:89] found id: ""
	I0805 13:02:41.353313  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.353324  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:41.353333  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:41.353397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:41.393379  451238 cri.go:89] found id: ""
	I0805 13:02:41.393406  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.393417  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:41.393426  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:41.393502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:41.430980  451238 cri.go:89] found id: ""
	I0805 13:02:41.431012  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.431023  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:41.431031  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:41.431106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:41.467228  451238 cri.go:89] found id: ""
	I0805 13:02:41.467261  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.467273  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:41.467281  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:41.467348  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:41.502105  451238 cri.go:89] found id: ""
	I0805 13:02:41.502153  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.502166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:41.502175  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:41.502250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:41.539286  451238 cri.go:89] found id: ""
	I0805 13:02:41.539314  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.539325  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:41.539338  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:41.539353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:41.592135  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:41.592175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:41.608151  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:41.608184  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:41.680096  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:41.680131  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:41.680148  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:41.759589  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:41.759628  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:39.025448  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:41.526590  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:40.409447  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.909412  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:39.945459  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.444630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.300461  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:44.314310  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:44.314388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:44.348516  451238 cri.go:89] found id: ""
	I0805 13:02:44.348549  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.348562  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:44.348570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:44.348635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:44.388256  451238 cri.go:89] found id: ""
	I0805 13:02:44.388289  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.388299  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:44.388309  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:44.388383  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:44.426743  451238 cri.go:89] found id: ""
	I0805 13:02:44.426778  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.426786  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:44.426792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:44.426848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:44.463008  451238 cri.go:89] found id: ""
	I0805 13:02:44.463044  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.463054  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:44.463062  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:44.463129  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:44.497662  451238 cri.go:89] found id: ""
	I0805 13:02:44.497696  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.497707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:44.497715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:44.497789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:44.534253  451238 cri.go:89] found id: ""
	I0805 13:02:44.534281  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.534288  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:44.534294  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:44.534378  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:44.574350  451238 cri.go:89] found id: ""
	I0805 13:02:44.574380  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.574390  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:44.574398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:44.574468  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:44.609984  451238 cri.go:89] found id: ""
	I0805 13:02:44.610018  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.610031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:44.610044  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:44.610060  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:44.650363  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:44.650402  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:44.700997  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:44.701032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:44.716841  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:44.716874  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:44.785482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:44.785502  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:44.785517  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:44.023932  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.025733  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.909613  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.409724  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.445234  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.944157  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:48.946098  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.365382  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:47.378779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:47.378851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:47.413615  451238 cri.go:89] found id: ""
	I0805 13:02:47.413636  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.413645  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:47.413651  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:47.413699  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:47.448536  451238 cri.go:89] found id: ""
	I0805 13:02:47.448563  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.448572  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:47.448578  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:47.448629  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:47.490817  451238 cri.go:89] found id: ""
	I0805 13:02:47.490847  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.490856  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:47.490862  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:47.490931  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:47.533151  451238 cri.go:89] found id: ""
	I0805 13:02:47.533179  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.533187  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:47.533193  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:47.533250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:47.571991  451238 cri.go:89] found id: ""
	I0805 13:02:47.572022  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.572030  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:47.572036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:47.572096  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:47.606943  451238 cri.go:89] found id: ""
	I0805 13:02:47.606976  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.606987  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:47.606995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:47.607073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:47.644704  451238 cri.go:89] found id: ""
	I0805 13:02:47.644741  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.644753  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:47.644762  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:47.644828  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:47.687361  451238 cri.go:89] found id: ""
	I0805 13:02:47.687395  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.687408  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:47.687427  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:47.687453  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.766572  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:47.766614  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:47.812209  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:47.812242  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:47.862948  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:47.862987  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:47.878697  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:47.878729  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:47.951680  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.452861  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:50.466370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:50.466440  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:50.500001  451238 cri.go:89] found id: ""
	I0805 13:02:50.500031  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.500043  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:50.500051  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:50.500126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:50.541752  451238 cri.go:89] found id: ""
	I0805 13:02:50.541786  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.541794  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:50.541800  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:50.541864  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:50.578889  451238 cri.go:89] found id: ""
	I0805 13:02:50.578915  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.578923  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:50.578930  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:50.578984  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:50.614865  451238 cri.go:89] found id: ""
	I0805 13:02:50.614896  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.614906  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:50.614912  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:50.614980  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:50.656169  451238 cri.go:89] found id: ""
	I0805 13:02:50.656195  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.656202  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:50.656209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:50.656277  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:50.695050  451238 cri.go:89] found id: ""
	I0805 13:02:50.695082  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.695099  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:50.695108  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:50.695187  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:50.733205  451238 cri.go:89] found id: ""
	I0805 13:02:50.733233  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.733242  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:50.733249  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:50.733300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:50.770654  451238 cri.go:89] found id: ""
	I0805 13:02:50.770683  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.770693  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:50.770706  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:50.770721  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:50.826521  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:50.826567  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:50.842153  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:50.842181  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:50.916445  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.916474  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:50.916487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:50.999973  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:51.000020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:48.525240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.024459  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:49.907505  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.909037  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:50.946199  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.444128  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.539541  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:53.553804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:53.553893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:53.593075  451238 cri.go:89] found id: ""
	I0805 13:02:53.593105  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.593114  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:53.593121  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:53.593190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:53.629967  451238 cri.go:89] found id: ""
	I0805 13:02:53.630001  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.630012  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:53.630020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:53.630088  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:53.663535  451238 cri.go:89] found id: ""
	I0805 13:02:53.663564  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.663572  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:53.663577  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:53.663635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:53.697650  451238 cri.go:89] found id: ""
	I0805 13:02:53.697676  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.697684  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:53.697690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:53.697741  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:53.732845  451238 cri.go:89] found id: ""
	I0805 13:02:53.732873  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.732883  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:53.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:53.732950  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:53.774673  451238 cri.go:89] found id: ""
	I0805 13:02:53.774703  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.774712  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:53.774719  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:53.774783  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:53.815368  451238 cri.go:89] found id: ""
	I0805 13:02:53.815401  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.815413  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:53.815423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:53.815487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:53.849726  451238 cri.go:89] found id: ""
	I0805 13:02:53.849760  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.849771  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:53.849785  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:53.849801  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:53.925356  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:53.925398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.966721  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:53.966751  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:54.023096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:54.023140  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:54.037634  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:54.037666  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:54.115159  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:56.616326  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:56.629665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:56.629744  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:56.665665  451238 cri.go:89] found id: ""
	I0805 13:02:56.665701  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.665713  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:56.665722  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:56.665790  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:56.700446  451238 cri.go:89] found id: ""
	I0805 13:02:56.700473  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.700481  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:56.700488  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:56.700554  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:56.737152  451238 cri.go:89] found id: ""
	I0805 13:02:56.737190  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.737202  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:56.737210  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:56.737283  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:56.777909  451238 cri.go:89] found id: ""
	I0805 13:02:56.777942  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.777954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:56.777961  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:56.778027  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:56.813503  451238 cri.go:89] found id: ""
	I0805 13:02:56.813537  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.813547  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:56.813556  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:56.813625  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:56.848964  451238 cri.go:89] found id: ""
	I0805 13:02:56.848993  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.849002  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:56.849008  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:56.849071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:56.884310  451238 cri.go:89] found id: ""
	I0805 13:02:56.884339  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.884347  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:56.884356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:56.884417  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:56.925895  451238 cri.go:89] found id: ""
	I0805 13:02:56.925926  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.925936  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:56.925948  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:56.925962  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:53.025086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.025424  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.026117  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.909851  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.411536  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.945123  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.945278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.982847  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:56.982882  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:56.997703  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:56.997742  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:57.071130  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:57.071153  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:57.071174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:57.152985  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:57.153029  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.697501  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:59.711799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:59.711879  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:59.746992  451238 cri.go:89] found id: ""
	I0805 13:02:59.747024  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.747035  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:59.747043  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:59.747115  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:59.780563  451238 cri.go:89] found id: ""
	I0805 13:02:59.780592  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.780604  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:59.780611  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:59.780676  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:59.816973  451238 cri.go:89] found id: ""
	I0805 13:02:59.817007  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.817019  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:59.817027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:59.817098  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:59.851989  451238 cri.go:89] found id: ""
	I0805 13:02:59.852018  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.852028  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:59.852035  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:59.852086  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:59.887491  451238 cri.go:89] found id: ""
	I0805 13:02:59.887517  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.887525  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:59.887535  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:59.887587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:59.924965  451238 cri.go:89] found id: ""
	I0805 13:02:59.924997  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.925005  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:59.925012  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:59.925062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:59.965830  451238 cri.go:89] found id: ""
	I0805 13:02:59.965860  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.965868  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:59.965875  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:59.965932  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:00.003208  451238 cri.go:89] found id: ""
	I0805 13:03:00.003241  451238 logs.go:276] 0 containers: []
	W0805 13:03:00.003250  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:00.003260  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:00.003275  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:00.056865  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:00.056911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:00.070563  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:00.070593  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:00.137931  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:00.137957  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:00.137976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:00.221598  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:00.221649  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.525042  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.024461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:58.903499  450576 pod_ready.go:81] duration metric: took 4m0.001018928s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	E0805 13:02:58.903533  450576 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:02:58.903556  450576 pod_ready.go:38] duration metric: took 4m8.049032492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:02:58.903598  450576 kubeadm.go:597] duration metric: took 4m18.518107211s to restartPrimaryControlPlane
	W0805 13:02:58.903786  450576 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:02:58.903819  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:02:59.945464  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.443954  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.761328  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:02.775836  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:02.775904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:02.812714  451238 cri.go:89] found id: ""
	I0805 13:03:02.812752  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.812764  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:02.812773  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:02.812848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:02.850072  451238 cri.go:89] found id: ""
	I0805 13:03:02.850103  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.850130  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:02.850138  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:02.850197  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:02.886956  451238 cri.go:89] found id: ""
	I0805 13:03:02.887081  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.887103  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:02.887114  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:02.887188  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:02.924874  451238 cri.go:89] found id: ""
	I0805 13:03:02.924906  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.924918  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:02.924925  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:02.924996  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:02.965965  451238 cri.go:89] found id: ""
	I0805 13:03:02.965996  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.966007  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:02.966015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:02.966101  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:03.001081  451238 cri.go:89] found id: ""
	I0805 13:03:03.001118  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.001130  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:03.001140  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:03.001201  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:03.036194  451238 cri.go:89] found id: ""
	I0805 13:03:03.036223  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.036234  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:03.036243  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:03.036303  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:03.071905  451238 cri.go:89] found id: ""
	I0805 13:03:03.071940  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.071951  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:03.071964  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:03.071982  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:03.124400  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:03.124442  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:03.138492  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:03.138520  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:03.207300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:03.207326  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:03.207342  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:03.294941  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:03.294983  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:05.836187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:05.850504  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:05.850609  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:05.889692  451238 cri.go:89] found id: ""
	I0805 13:03:05.889718  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.889729  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:05.889737  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:05.889804  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:05.924597  451238 cri.go:89] found id: ""
	I0805 13:03:05.924630  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.924640  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:05.924647  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:05.924711  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:05.960373  451238 cri.go:89] found id: ""
	I0805 13:03:05.960404  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.960413  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:05.960419  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:05.960471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:05.996583  451238 cri.go:89] found id: ""
	I0805 13:03:05.996617  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.996628  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:05.996636  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:05.996708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:06.033539  451238 cri.go:89] found id: ""
	I0805 13:03:06.033567  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.033575  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:06.033586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:06.033655  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:06.069348  451238 cri.go:89] found id: ""
	I0805 13:03:06.069378  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.069391  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:06.069401  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:06.069466  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:06.103570  451238 cri.go:89] found id: ""
	I0805 13:03:06.103599  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.103607  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:06.103613  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:06.103665  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:06.140230  451238 cri.go:89] found id: ""
	I0805 13:03:06.140260  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.140271  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:06.140284  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:06.140300  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:06.191073  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:06.191123  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:06.204825  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:06.204857  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:06.281309  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:06.281339  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:06.281358  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:06.361709  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:06.361749  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:04.025007  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.524506  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:04.444267  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.444910  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.445441  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.903194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:08.921602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:08.921681  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:08.960916  451238 cri.go:89] found id: ""
	I0805 13:03:08.960945  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.960975  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:08.960986  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:08.961055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:08.996316  451238 cri.go:89] found id: ""
	I0805 13:03:08.996417  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.996436  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:08.996448  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:08.996522  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:09.038536  451238 cri.go:89] found id: ""
	I0805 13:03:09.038572  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.038584  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:09.038593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:09.038664  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:09.075368  451238 cri.go:89] found id: ""
	I0805 13:03:09.075396  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.075405  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:09.075412  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:09.075474  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:09.114232  451238 cri.go:89] found id: ""
	I0805 13:03:09.114262  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.114272  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:09.114280  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:09.114353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:09.161878  451238 cri.go:89] found id: ""
	I0805 13:03:09.161964  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.161978  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:09.161988  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:09.162062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:09.206694  451238 cri.go:89] found id: ""
	I0805 13:03:09.206727  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.206739  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:09.206748  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:09.206890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:09.257029  451238 cri.go:89] found id: ""
	I0805 13:03:09.257066  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.257079  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:09.257090  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:09.257107  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:09.278638  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:09.278679  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:09.353760  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:09.353781  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:09.353793  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:09.438371  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:09.438419  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:09.487253  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:09.487297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:08.018954  450884 pod_ready.go:81] duration metric: took 4m0.00055059s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:08.018987  450884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:03:08.019010  450884 pod_ready.go:38] duration metric: took 4m11.028507743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:08.019048  450884 kubeadm.go:597] duration metric: took 4m19.097834327s to restartPrimaryControlPlane
	W0805 13:03:08.019122  450884 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:08.019157  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:10.945002  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.945953  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.042215  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:12.055721  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:12.055812  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:12.096936  451238 cri.go:89] found id: ""
	I0805 13:03:12.096965  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.096977  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:12.096985  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:12.097051  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:12.136149  451238 cri.go:89] found id: ""
	I0805 13:03:12.136181  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.136192  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:12.136199  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:12.136276  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:12.180568  451238 cri.go:89] found id: ""
	I0805 13:03:12.180606  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.180618  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:12.180626  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:12.180695  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:12.221759  451238 cri.go:89] found id: ""
	I0805 13:03:12.221794  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.221806  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:12.221815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:12.221882  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:12.259460  451238 cri.go:89] found id: ""
	I0805 13:03:12.259490  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.259498  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:12.259508  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:12.259563  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:12.301245  451238 cri.go:89] found id: ""
	I0805 13:03:12.301277  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.301289  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:12.301297  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:12.301368  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:12.343640  451238 cri.go:89] found id: ""
	I0805 13:03:12.343678  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.343690  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:12.343698  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:12.343809  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:12.382729  451238 cri.go:89] found id: ""
	I0805 13:03:12.382762  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.382774  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:12.382787  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:12.382807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:12.400862  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:12.400897  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:12.478755  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:12.478788  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:12.478807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:12.566029  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:12.566080  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:12.611834  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:12.611929  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:15.171517  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:15.185569  451238 kubeadm.go:597] duration metric: took 4m3.737627997s to restartPrimaryControlPlane
	W0805 13:03:15.185662  451238 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:15.185697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:15.669994  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:15.684794  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:15.695088  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:15.705403  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:15.705427  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:15.705488  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:15.714777  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:15.714833  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:15.724437  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:15.733263  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:15.733317  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:15.743004  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.752219  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:15.752278  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.761788  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:15.771193  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:15.771245  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
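The stale-config cleanup just above applies one rule per kubeconfig file: if "https://control-plane.minikube.internal:8443" cannot be found in the file (here because none of the files exist after the reset), the file is removed so the upcoming `kubeadm init` can rewrite it. A hedged condensation of those grep/rm pairs into a loop (the loop form is an illustration; the individual commands are taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control-plane endpoint
      sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done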
	I0805 13:03:15.780964  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:15.855628  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:03:15.855751  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:16.015686  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:16.015880  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:16.016041  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:16.207054  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:16.209133  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:16.209256  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:16.209376  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:16.209493  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:16.209597  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:16.209703  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:16.211637  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:16.211726  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:16.211833  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:16.211959  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:16.212690  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:16.212863  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:16.212963  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:16.283080  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:16.609523  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:16.765635  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:16.934487  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:16.955335  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:16.956267  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:16.956328  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:17.088081  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:15.445305  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.447306  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.090118  451238 out.go:204]   - Booting up control plane ...
	I0805 13:03:17.090264  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:17.100902  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:17.101263  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:17.102210  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:17.112522  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:03:19.943658  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:21.944253  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:23.945158  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:25.252381  450576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.348530672s)
	I0805 13:03:25.252504  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:25.269305  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:25.279322  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:25.289241  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:25.289266  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:25.289304  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:25.298671  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:25.298732  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:25.309962  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:25.320180  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:25.320247  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:25.330481  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.340565  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:25.340652  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.351244  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:25.361443  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:25.361536  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:25.371655  450576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:25.419277  450576 kubeadm.go:310] W0805 13:03:25.398597    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.420220  450576 kubeadm.go:310] W0805 13:03:25.399642    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.537148  450576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
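The two deprecation warnings above are kubeadm v1.31.0-rc.0 reading a kubeadm.k8s.io/v1beta3 configuration; they are non-fatal here, and the warning text itself names the remedy. If one wanted to regenerate the config under the newer API version, the command suggested by kubeadm (file names are placeholders from the warning, not paths used by this test) would be:

    kubeadm config migrate --old-config old.yaml --new-config new.yaml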
	I0805 13:03:25.945501  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:27.945972  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.413703  450576 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0805 13:03:33.413775  450576 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:33.413863  450576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:33.414008  450576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:33.414152  450576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:33.414235  450576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:33.415804  450576 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:33.415874  450576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:33.415949  450576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:33.416037  450576 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:33.416101  450576 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:33.416174  450576 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:33.416237  450576 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:33.416289  450576 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:33.416357  450576 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:33.416437  450576 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:33.416518  450576 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:33.416553  450576 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:33.416603  450576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:33.416646  450576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:33.416701  450576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:33.416745  450576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:33.416816  450576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:33.416878  450576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:33.416971  450576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:33.417059  450576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:33.418572  450576 out.go:204]   - Booting up control plane ...
	I0805 13:03:33.418671  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:33.418751  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:33.418833  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:33.418965  450576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:33.419092  450576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:33.419172  450576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:33.419342  450576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:33.419488  450576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0805 13:03:33.419577  450576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.308417ms
	I0805 13:03:33.419672  450576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:33.419780  450576 kubeadm.go:310] [api-check] The API server is healthy after 5.001429681s
	I0805 13:03:33.419908  450576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:33.420049  450576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:33.420117  450576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:33.420293  450576 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-669469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:33.420385  450576 kubeadm.go:310] [bootstrap-token] Using token: i9zl3x.c4hzh1c9ccxlydzt
	I0805 13:03:33.421925  450576 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:33.422042  450576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:33.422157  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:33.422352  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:33.422488  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:33.422649  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:33.422784  450576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:33.422914  450576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:33.422991  450576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:33.423060  450576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:33.423070  450576 kubeadm.go:310] 
	I0805 13:03:33.423160  450576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:33.423173  450576 kubeadm.go:310] 
	I0805 13:03:33.423274  450576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:33.423283  450576 kubeadm.go:310] 
	I0805 13:03:33.423316  450576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:33.423409  450576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:33.423495  450576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:33.423513  450576 kubeadm.go:310] 
	I0805 13:03:33.423616  450576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:33.423628  450576 kubeadm.go:310] 
	I0805 13:03:33.423692  450576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:33.423701  450576 kubeadm.go:310] 
	I0805 13:03:33.423793  450576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:33.423931  450576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:33.424030  450576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:33.424039  450576 kubeadm.go:310] 
	I0805 13:03:33.424106  450576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:33.424176  450576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:33.424185  450576 kubeadm.go:310] 
	I0805 13:03:33.424282  450576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424430  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:33.424473  450576 kubeadm.go:310] 	--control-plane 
	I0805 13:03:33.424482  450576 kubeadm.go:310] 
	I0805 13:03:33.424588  450576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:33.424602  450576 kubeadm.go:310] 
	I0805 13:03:33.424725  450576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424870  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
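The join commands above embed the bootstrap token i9zl3x.c4hzh1c9ccxlydzt, which expires after its default TTL. Nothing in this run uses it again, but if another node had to join later, standard kubeadm subcommands on the control plane can list or regenerate it; a sketch:

	# list existing bootstrap tokens and their expiry
	sudo kubeadm token list
	# mint a new token and print a ready-to-use join command
	sudo kubeadm token create --print-join-command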
	I0805 13:03:33.424892  450576 cni.go:84] Creating CNI manager for ""
	I0805 13:03:33.424911  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:33.426503  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:33.427981  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:33.439484  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
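The 496-byte conflist written here is the bridge CNI configuration that cri-o picks up from /etc/cni/net.d; its contents are not echoed in the log. A sketch for inspecting it on the node, using the profile name from this run:

	# show the bridge CNI config minikube just wrote (profile name taken from this log)
	minikube ssh -p no-preload-669469 -- sudo cat /etc/cni/net.d/1-k8s.conflist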
	I0805 13:03:33.458459  450576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:33.458547  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:33.458579  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-669469 minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=no-preload-669469 minikube.k8s.io/primary=true
	I0805 13:03:33.488847  450576 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:29.946423  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:32.444923  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.674306  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.174940  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.674936  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.174693  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.675004  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.174801  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.674878  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.174394  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.263948  450576 kubeadm.go:1113] duration metric: took 3.805464287s to wait for elevateKubeSystemPrivileges
	I0805 13:03:37.263985  450576 kubeadm.go:394] duration metric: took 4m56.93214495s to StartCluster
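The repeated `kubectl get sa default` calls above are a readiness poll: the command fails until the service-account controller has created the `default` ServiceAccount, so minikube simply retries about every 500ms (visible in the timestamps) until it succeeds. The same wait done by hand, with the binary and kubeconfig paths from this log, would look roughly like:

	# poll until the default ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done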
	I0805 13:03:37.264025  450576 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.264143  450576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:03:37.265965  450576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.266283  450576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:03:37.266400  450576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:03:37.266469  450576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-669469"
	I0805 13:03:37.266510  450576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-669469"
	W0805 13:03:37.266518  450576 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:03:37.266519  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:03:37.266551  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.266505  450576 addons.go:69] Setting default-storageclass=true in profile "no-preload-669469"
	I0805 13:03:37.266547  450576 addons.go:69] Setting metrics-server=true in profile "no-preload-669469"
	I0805 13:03:37.266612  450576 addons.go:234] Setting addon metrics-server=true in "no-preload-669469"
	I0805 13:03:37.266616  450576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-669469"
	W0805 13:03:37.266627  450576 addons.go:243] addon metrics-server should already be in state true
	I0805 13:03:37.266668  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267035  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267041  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267085  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267985  450576 out.go:177] * Verifying Kubernetes components...
	I0805 13:03:37.269486  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:03:37.283242  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0805 13:03:37.283291  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0805 13:03:37.283245  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0805 13:03:37.283710  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283785  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283717  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284316  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284319  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284335  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284360  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284734  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284735  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284746  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284963  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.285343  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285375  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.285387  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285441  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.288699  450576 addons.go:234] Setting addon default-storageclass=true in "no-preload-669469"
	W0805 13:03:37.288722  450576 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:03:37.288753  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.289023  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.289049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.303814  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0805 13:03:37.304491  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.305081  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.305104  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.305552  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0805 13:03:37.305566  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0805 13:03:37.305583  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.305928  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306007  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306148  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.306190  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.306485  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306503  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306595  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306611  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306971  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.306998  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.307157  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.307162  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.309002  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.309241  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.311054  450576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:03:37.311055  450576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:03:37.312682  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:03:37.312695  450576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:03:37.312710  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.312834  450576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.312856  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:03:37.312874  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.317044  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317635  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.317660  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317753  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317955  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318141  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.318360  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.318400  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.318427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.318539  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.318633  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318967  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.319111  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.319241  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.325066  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0805 13:03:37.325633  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.326052  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.326071  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.326326  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.326473  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.328502  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.328814  450576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.328826  450576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:03:37.328839  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.331482  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.331853  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.331874  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.332013  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.332169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.332270  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.332358  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.483477  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:03:37.501924  450576 node_ready.go:35] waiting up to 6m0s for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511394  450576 node_ready.go:49] node "no-preload-669469" has status "Ready":"True"
	I0805 13:03:37.511427  450576 node_ready.go:38] duration metric: took 9.462968ms for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511443  450576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:37.526505  450576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:37.575598  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.583338  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:03:37.583362  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:03:37.594019  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.629885  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:03:37.629913  450576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:03:37.684790  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.684825  450576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:03:37.753629  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.857352  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857386  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.857777  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.857780  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.857812  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.857829  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857838  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.858101  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.858117  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.858153  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.871616  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.871639  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.871970  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.872022  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.872031  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290429  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290449  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290784  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.290856  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290871  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290880  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290829  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.291265  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.291289  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.291271  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.880274  450576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.126602375s)
	I0805 13:03:38.880331  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880344  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880868  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.880896  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.880906  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880916  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880871  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881196  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.881204  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881211  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.881230  450576 addons.go:475] Verifying addon metrics-server=true in "no-preload-669469"
	I0805 13:03:38.882896  450576 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:03:34.945631  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:37.446855  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:39.741362  450884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.722174979s)
	I0805 13:03:39.741438  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:39.760465  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:39.770587  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:39.780157  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:39.780177  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:39.780215  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 13:03:39.790172  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:39.790243  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:39.803838  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 13:03:39.816314  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:39.816367  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:39.826636  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.836513  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:39.836570  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.846356  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 13:03:39.855694  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:39.855770  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:39.865721  450884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:40.081251  450884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:38.884521  450576 addons.go:510] duration metric: took 1.618121451s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
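With default-storageclass, storage-provisioner and metrics-server enabled, the quickest manual confirmation is to look at the addon workloads directly. A sketch, assuming kubectl's context is named after the profile (minikube's default):

	# verify the enabled addons came up
	kubectl --context no-preload-669469 -n kube-system get deploy metrics-server
	kubectl --context no-preload-669469 -n kube-system get pod storage-provisioner
	kubectl --context no-preload-669469 get storageclass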
	I0805 13:03:39.536758  450576 pod_ready.go:102] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:41.035239  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.035266  450576 pod_ready.go:81] duration metric: took 3.508734543s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.035280  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042787  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.042811  450576 pod_ready.go:81] duration metric: took 7.522909ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042824  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048338  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:42.048363  450576 pod_ready.go:81] duration metric: took 1.005531569s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048373  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:39.945815  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:42.445704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:44.056394  450576 pod_ready.go:102] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:45.555280  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:45.555310  450576 pod_ready.go:81] duration metric: took 3.506927542s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:45.555321  450576 pod_ready.go:38] duration metric: took 8.043865797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:45.555338  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:45.555397  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:45.572225  450576 api_server.go:72] duration metric: took 8.30589728s to wait for apiserver process to appear ...
	I0805 13:03:45.572249  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:45.572272  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 13:03:45.578042  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 13:03:45.579014  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 13:03:45.579034  450576 api_server.go:131] duration metric: took 6.778214ms to wait for apiserver health ...
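The health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with the body `ok` (both visible in the log). Reproduced by hand with the address from this run:

	# -k skips verification of the cluster's self-signed CA; expected output: ok
	curl -k https://192.168.72.223:8443/healthz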
	I0805 13:03:45.579042  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:45.585537  450576 system_pods.go:59] 9 kube-system pods found
	I0805 13:03:45.585660  450576 system_pods.go:61] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.585673  450576 system_pods.go:61] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.585679  450576 system_pods.go:61] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.585687  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.585694  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.585700  450576 system_pods.go:61] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.585705  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.585716  450576 system_pods.go:61] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.585726  450576 system_pods.go:61] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.585736  450576 system_pods.go:74] duration metric: took 6.688464ms to wait for pod list to return data ...
	I0805 13:03:45.585749  450576 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:03:45.589498  450576 default_sa.go:45] found service account: "default"
	I0805 13:03:45.589526  450576 default_sa.go:55] duration metric: took 3.765664ms for default service account to be created ...
	I0805 13:03:45.589535  450576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:03:45.597499  450576 system_pods.go:86] 9 kube-system pods found
	I0805 13:03:45.597527  450576 system_pods.go:89] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.597533  450576 system_pods.go:89] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.597537  450576 system_pods.go:89] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.597541  450576 system_pods.go:89] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.597547  450576 system_pods.go:89] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.597550  450576 system_pods.go:89] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.597554  450576 system_pods.go:89] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.597563  450576 system_pods.go:89] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.597568  450576 system_pods.go:89] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.597577  450576 system_pods.go:126] duration metric: took 8.035546ms to wait for k8s-apps to be running ...
	I0805 13:03:45.597586  450576 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:03:45.597631  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:45.619317  450576 system_svc.go:56] duration metric: took 21.706117ms WaitForService to wait for kubelet
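The WaitForService step reduces to the `systemctl is-active` call above: exit status 0 means the kubelet unit is running, and --quiet suppresses the textual state. The same check as a one-line sketch:

	# prints the message only if the kubelet unit is active
	sudo systemctl is-active --quiet kubelet && echo kubelet is running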
	I0805 13:03:45.619365  450576 kubeadm.go:582] duration metric: took 8.353035332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:03:45.619398  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:03:45.622763  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:03:45.622790  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 13:03:45.622801  450576 node_conditions.go:105] duration metric: took 3.396756ms to run NodePressure ...
	I0805 13:03:45.622814  450576 start.go:241] waiting for startup goroutines ...
	I0805 13:03:45.622821  450576 start.go:246] waiting for cluster config update ...
	I0805 13:03:45.622831  450576 start.go:255] writing updated cluster config ...
	I0805 13:03:45.623102  450576 ssh_runner.go:195] Run: rm -f paused
	I0805 13:03:45.682547  450576 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0805 13:03:45.684415  450576 out.go:177] * Done! kubectl is now configured to use "no-preload-669469" cluster and "default" namespace by default
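The "minor skew: 1" note flags that the local kubectl (1.30.3) is one minor version behind the new cluster (1.31.0-rc.0), which is within kubectl's supported +/-1 version skew, so it is informational only. To see both versions side by side:

	# clientVersion vs. serverVersion; one minor of skew is supported
	kubectl version --output=yaml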
	I0805 13:03:48.707730  450884 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 13:03:48.707817  450884 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:48.707920  450884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:48.708065  450884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:48.708218  450884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:48.708311  450884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:48.709807  450884 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:48.709878  450884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:48.709931  450884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:48.710008  450884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:48.710084  450884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:48.710148  450884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:48.710196  450884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:48.710251  450884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:48.710316  450884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:48.710415  450884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:48.710520  450884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:48.710582  450884 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:48.710656  450884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:48.710700  450884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:48.710746  450884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:48.710790  450884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:48.710843  450884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:48.710895  450884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:48.710971  450884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:48.711055  450884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:48.713503  450884 out.go:204]   - Booting up control plane ...
	I0805 13:03:48.713601  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:48.713687  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:48.713763  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:48.713911  450884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:48.714039  450884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:48.714105  450884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:48.714222  450884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:48.714284  450884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 13:03:48.714345  450884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.128103ms
	I0805 13:03:48.714423  450884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:48.714491  450884 kubeadm.go:310] [api-check] The API server is healthy after 5.502076793s
	I0805 13:03:48.714600  450884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:48.714730  450884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:48.714794  450884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:48.714987  450884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-371585 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:48.715075  450884 kubeadm.go:310] [bootstrap-token] Using token: cpuyhq.sjq5yhx27tk7meks
	I0805 13:03:48.716575  450884 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:48.716686  450884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:48.716775  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:48.716952  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:48.717075  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:48.717196  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:48.717270  450884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:48.717391  450884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:48.717450  450884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:48.717512  450884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:48.717521  450884 kubeadm.go:310] 
	I0805 13:03:48.717613  450884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:48.717623  450884 kubeadm.go:310] 
	I0805 13:03:48.717724  450884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:48.717734  450884 kubeadm.go:310] 
	I0805 13:03:48.717768  450884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:48.717848  450884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:48.717892  450884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:48.717898  450884 kubeadm.go:310] 
	I0805 13:03:48.717968  450884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:48.717978  450884 kubeadm.go:310] 
	I0805 13:03:48.718047  450884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:48.718057  450884 kubeadm.go:310] 
	I0805 13:03:48.718133  450884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:48.718220  450884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:48.718297  450884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:48.718304  450884 kubeadm.go:310] 
	I0805 13:03:48.718422  450884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:48.718506  450884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:48.718513  450884 kubeadm.go:310] 
	I0805 13:03:48.718585  450884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718669  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:48.718688  450884 kubeadm.go:310] 	--control-plane 
	I0805 13:03:48.718694  450884 kubeadm.go:310] 
	I0805 13:03:48.718761  450884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:48.718769  450884 kubeadm.go:310] 
	I0805 13:03:48.718848  450884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718948  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:48.718957  450884 cni.go:84] Creating CNI manager for ""
	I0805 13:03:48.718965  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:48.720262  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:44.946225  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:47.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:48.721390  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:48.732324  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:48.750318  450884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:48.750397  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:48.750398  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-371585 minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=default-k8s-diff-port-371585 minikube.k8s.io/primary=true
	I0805 13:03:48.781590  450884 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:48.966544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.467473  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.967093  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.466813  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.967183  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.467350  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.967432  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.444667  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:49.444719  450393 pod_ready.go:81] duration metric: took 4m0.006667631s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:49.444731  450393 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0805 13:03:49.444738  450393 pod_ready.go:38] duration metric: took 4m2.407503205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
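The 4m0s wait gave up with metrics-server-569cc877fc-k8mrt still not Ready. The usual next step when a pod stays NotReady is to read its events and container logs; a diagnostic sketch, assuming a kubectl context pointed at the same cluster and the standard k8s-app=metrics-server label from the addon manifests:

	# why is the metrics-server pod stuck NotReady?
	kubectl -n kube-system describe pod metrics-server-569cc877fc-k8mrt
	kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50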
	I0805 13:03:49.444757  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:49.444787  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:49.444849  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:49.502039  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:49.502067  450393 cri.go:89] found id: ""
	I0805 13:03:49.502079  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:49.502139  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.510426  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:49.510494  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:49.553861  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:49.553889  450393 cri.go:89] found id: ""
	I0805 13:03:49.553899  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:49.553960  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.558802  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:49.558868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:49.594787  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:49.594810  450393 cri.go:89] found id: ""
	I0805 13:03:49.594828  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:49.594891  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.599735  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:49.599822  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:49.637856  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:49.637878  450393 cri.go:89] found id: ""
	I0805 13:03:49.637886  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:49.637939  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.642228  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:49.642295  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:49.683822  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:49.683844  450393 cri.go:89] found id: ""
	I0805 13:03:49.683853  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:49.683913  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.688077  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:49.688155  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:49.724887  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:49.724913  450393 cri.go:89] found id: ""
	I0805 13:03:49.724923  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:49.724987  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.728965  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:49.729052  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:49.765826  450393 cri.go:89] found id: ""
	I0805 13:03:49.765859  450393 logs.go:276] 0 containers: []
	W0805 13:03:49.765871  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:49.765878  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:49.765944  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:49.803790  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:49.803811  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.803815  450393 cri.go:89] found id: ""
	I0805 13:03:49.803823  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:49.803887  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.808064  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.812308  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:49.812332  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.851842  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:49.851867  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:50.418758  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:50.418808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:50.564965  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:50.564999  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:50.608518  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:50.608557  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:50.658446  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:50.658482  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:50.699924  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:50.699962  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:50.741228  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:50.741264  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:50.776100  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:50.776133  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:50.827847  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:50.827880  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:50.867699  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:50.867731  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:50.920049  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:50.920085  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:50.934198  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:50.934224  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:53.477808  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:53.494062  450393 api_server.go:72] duration metric: took 4m14.183013645s to wait for apiserver process to appear ...
	I0805 13:03:53.494093  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:53.494143  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:53.494211  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:53.534293  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:53.534322  450393 cri.go:89] found id: ""
	I0805 13:03:53.534333  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:53.534400  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.539014  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:53.539088  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:53.576587  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:53.576608  450393 cri.go:89] found id: ""
	I0805 13:03:53.576616  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:53.576667  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.582068  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:53.582147  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:53.623240  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:53.623264  450393 cri.go:89] found id: ""
	I0805 13:03:53.623274  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:53.623352  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.627638  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:53.627699  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:53.668167  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:53.668198  450393 cri.go:89] found id: ""
	I0805 13:03:53.668209  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:53.668281  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.672390  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:53.672469  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:53.714046  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:53.714069  450393 cri.go:89] found id: ""
	I0805 13:03:53.714078  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:53.714130  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.718325  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:53.718392  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:53.756343  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:53.756372  450393 cri.go:89] found id: ""
	I0805 13:03:53.756382  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:53.756444  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.760627  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:53.760696  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:53.806370  450393 cri.go:89] found id: ""
	I0805 13:03:53.806406  450393 logs.go:276] 0 containers: []
	W0805 13:03:53.806424  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:53.806432  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:53.806505  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:53.843082  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:53.843116  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:53.843121  450393 cri.go:89] found id: ""
	I0805 13:03:53.843129  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:53.843188  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.847214  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.851093  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:53.851112  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:52.467589  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:52.967390  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.466580  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.967544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.467454  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.967281  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.467111  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.467255  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.296506  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:54.296556  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:54.343983  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:54.344026  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:54.389236  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:54.389271  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:54.427964  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:54.427996  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:54.465953  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:54.465988  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:54.521755  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:54.521835  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:54.565481  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:54.565513  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:54.606592  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:54.606634  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:54.650820  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:54.650858  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:54.704512  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:54.704559  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:54.722149  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:54.722184  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:54.844289  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:54.844324  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.386998  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 13:03:57.391714  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 13:03:57.392752  450393 api_server.go:141] control plane version: v1.30.3
	I0805 13:03:57.392776  450393 api_server.go:131] duration metric: took 3.898675075s to wait for apiserver health ...
	I0805 13:03:57.392783  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:57.392812  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:57.392868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:57.430171  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:57.430201  450393 cri.go:89] found id: ""
	I0805 13:03:57.430210  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:57.430270  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.434861  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:57.434920  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:57.490595  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:57.490622  450393 cri.go:89] found id: ""
	I0805 13:03:57.490632  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:57.490702  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.496054  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:57.496141  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:57.540248  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:57.540278  450393 cri.go:89] found id: ""
	I0805 13:03:57.540289  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:57.540353  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.547750  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:57.547820  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:57.595821  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.595852  450393 cri.go:89] found id: ""
	I0805 13:03:57.595864  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:57.595932  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.600153  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:57.600225  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:57.640382  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:57.640409  450393 cri.go:89] found id: ""
	I0805 13:03:57.640418  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:57.640486  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.645476  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:57.645569  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:57.700199  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:57.700224  450393 cri.go:89] found id: ""
	I0805 13:03:57.700233  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:57.700294  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.704818  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:57.704874  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:57.745647  450393 cri.go:89] found id: ""
	I0805 13:03:57.745677  450393 logs.go:276] 0 containers: []
	W0805 13:03:57.745687  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:57.745696  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:57.745760  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:57.787327  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:57.787367  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:57.787374  450393 cri.go:89] found id: ""
	I0805 13:03:57.787384  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:57.787448  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.792340  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.796906  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:57.796933  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:57.850401  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:57.850447  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:57.961760  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:57.961808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:58.009682  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:58.009720  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:58.061874  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:58.061915  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:58.105715  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:58.105745  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:58.164739  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:58.164780  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:58.203530  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:58.203579  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:58.245478  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:58.245511  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:58.647807  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:58.647857  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:58.694175  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:58.694211  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:58.709744  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:58.709773  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:58.750668  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:58.750698  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:04:01.297212  450393 system_pods.go:59] 8 kube-system pods found
	I0805 13:04:01.297248  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.297255  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.297261  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.297265  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.297269  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.297273  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.297281  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.297289  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.297300  450393 system_pods.go:74] duration metric: took 3.904508974s to wait for pod list to return data ...
	I0805 13:04:01.297312  450393 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:01.299765  450393 default_sa.go:45] found service account: "default"
	I0805 13:04:01.299792  450393 default_sa.go:55] duration metric: took 2.470684ms for default service account to be created ...
	I0805 13:04:01.299802  450393 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:01.304612  450393 system_pods.go:86] 8 kube-system pods found
	I0805 13:04:01.304644  450393 system_pods.go:89] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.304651  450393 system_pods.go:89] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.304656  450393 system_pods.go:89] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.304661  450393 system_pods.go:89] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.304665  450393 system_pods.go:89] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.304670  450393 system_pods.go:89] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.304677  450393 system_pods.go:89] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.304685  450393 system_pods.go:89] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.304694  450393 system_pods.go:126] duration metric: took 4.885808ms to wait for k8s-apps to be running ...
	I0805 13:04:01.304702  450393 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:01.304751  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:01.323278  450393 system_svc.go:56] duration metric: took 18.55935ms WaitForService to wait for kubelet
	I0805 13:04:01.323316  450393 kubeadm.go:582] duration metric: took 4m22.01227204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:01.323349  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:01.326802  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:01.326829  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:01.326843  450393 node_conditions.go:105] duration metric: took 3.486931ms to run NodePressure ...
	I0805 13:04:01.326859  450393 start.go:241] waiting for startup goroutines ...
	I0805 13:04:01.326869  450393 start.go:246] waiting for cluster config update ...
	I0805 13:04:01.326883  450393 start.go:255] writing updated cluster config ...
	I0805 13:04:01.327230  450393 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:01.380315  450393 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:01.381891  450393 out.go:177] * Done! kubectl is now configured to use "embed-certs-321139" cluster and "default" namespace by default
	I0805 13:03:57.113870  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:03:57.114408  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:03:57.114630  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:03:57.467412  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:57.967538  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.467217  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.967035  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.466816  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.966909  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.467553  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.967667  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.467382  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.967495  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:02.085428  450884 kubeadm.go:1113] duration metric: took 13.335097096s to wait for elevateKubeSystemPrivileges
	I0805 13:04:02.085464  450884 kubeadm.go:394] duration metric: took 5m13.227479413s to StartCluster
	I0805 13:04:02.085482  450884 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.085571  450884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:04:02.087178  450884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.087425  450884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:04:02.087550  450884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:04:02.087653  450884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087659  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:04:02.087681  450884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087697  450884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087718  450884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087729  450884 addons.go:243] addon metrics-server should already be in state true
	I0805 13:04:02.087783  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.087727  450884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-371585"
	I0805 13:04:02.087692  450884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087953  450884 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:04:02.087986  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088294  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088377  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088406  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088415  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088935  450884 out.go:177] * Verifying Kubernetes components...
	I0805 13:04:02.090386  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:04:02.105328  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0805 13:04:02.105335  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0805 13:04:02.105853  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.105848  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106395  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106398  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106420  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106423  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106506  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0805 13:04:02.106879  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.106957  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106982  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.107193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.107508  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.107522  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.107534  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.107561  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.107903  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.108458  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.108490  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.111681  450884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.111709  450884 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:04:02.111775  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.113601  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.113648  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.127860  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0805 13:04:02.128512  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.128619  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0805 13:04:02.129023  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.129174  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129198  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129495  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129516  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129566  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129850  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129879  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.130443  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.131691  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.132370  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.133468  450884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:04:02.134210  450884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:04:02.134899  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0805 13:04:02.135049  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:04:02.135067  450884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:04:02.135099  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135183  450884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.135201  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:04:02.135216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135404  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.136704  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.136723  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.138362  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.138801  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.138918  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139264  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.139290  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.139335  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139377  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139404  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139482  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139637  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139737  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139807  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139867  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.139909  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.159720  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0805 13:04:02.160199  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.160744  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.160770  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.161048  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.161246  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.162535  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.162788  450884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.162805  450884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:04:02.162825  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.165787  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166204  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.166236  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166411  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.166594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.166744  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.166876  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.349175  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:04:02.453663  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.462474  450884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472177  450884 node_ready.go:49] node "default-k8s-diff-port-371585" has status "Ready":"True"
	I0805 13:04:02.472201  450884 node_ready.go:38] duration metric: took 9.692872ms for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472211  450884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:02.474341  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:04:02.474363  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:04:02.485604  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:02.514889  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.543388  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:04:02.543428  450884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:04:02.618040  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.618094  450884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:04:02.716705  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.784102  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784545  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784566  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784577  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784586  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784588  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.784851  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784868  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784868  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.797584  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.797617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.797938  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.797956  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431060  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431452  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:03.431511  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431530  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431539  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431839  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431893  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.746668  450884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029912928s)
	I0805 13:04:03.746734  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.746750  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.747152  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.747180  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.747191  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.747200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.748527  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.748558  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.748571  450884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-371585"
	I0805 13:04:03.750522  450884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:04:03.751714  450884 addons.go:510] duration metric: took 1.664163176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:04:04.491832  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.491861  450884 pod_ready.go:81] duration metric: took 2.00623062s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.491870  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496173  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.496194  450884 pod_ready.go:81] duration metric: took 4.317446ms for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496202  450884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500270  450884 pod_ready.go:92] pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.500297  450884 pod_ready.go:81] duration metric: took 4.088399ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500309  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504892  450884 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.504917  450884 pod_ready.go:81] duration metric: took 4.598589ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504926  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509448  450884 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.509468  450884 pod_ready.go:81] duration metric: took 4.535174ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509478  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890517  450884 pod_ready.go:92] pod "kube-proxy-4v6sn" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.890544  450884 pod_ready.go:81] duration metric: took 381.059204ms for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890552  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289670  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:05.289701  450884 pod_ready.go:81] duration metric: took 399.141309ms for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289712  450884 pod_ready.go:38] duration metric: took 2.817491444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
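(Aside: the pod_ready.go lines above poll each system-critical pod until its Ready condition is True. As a rough illustration only, not minikube's code, the following Go sketch does the equivalent with client-go; the kubeconfig path and pod name are copied from this log and are assumptions outside of it.)

// podready.go - minimal sketch, assuming client-go is available, of waiting
// for a pod's Ready condition as the pod_ready.go log lines above record.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// Ready:True is what the `pod ... has status "Ready":"True"` lines above report.
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Kubeconfig path taken from this log; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-5vxpl", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Ready")
}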
	I0805 13:04:05.289732  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:04:05.289805  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:04:05.305815  450884 api_server.go:72] duration metric: took 3.218344531s to wait for apiserver process to appear ...
	I0805 13:04:05.305848  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:04:05.305870  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 13:04:05.311144  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 13:04:05.312427  450884 api_server.go:141] control plane version: v1.30.3
	I0805 13:04:05.312450  450884 api_server.go:131] duration metric: took 6.595933ms to wait for apiserver health ...
	I0805 13:04:05.312460  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:04:05.493376  450884 system_pods.go:59] 9 kube-system pods found
	I0805 13:04:05.493417  450884 system_pods.go:61] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.493425  450884 system_pods.go:61] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.493432  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.493438  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.493444  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.493450  450884 system_pods.go:61] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.493456  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.493465  450884 system_pods.go:61] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.493475  450884 system_pods.go:61] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.493488  450884 system_pods.go:74] duration metric: took 181.019102ms to wait for pod list to return data ...
	I0805 13:04:05.493504  450884 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:05.688283  450884 default_sa.go:45] found service account: "default"
	I0805 13:04:05.688313  450884 default_sa.go:55] duration metric: took 194.799711ms for default service account to be created ...
	I0805 13:04:05.688323  450884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:05.892656  450884 system_pods.go:86] 9 kube-system pods found
	I0805 13:04:05.892688  450884 system_pods.go:89] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.892696  450884 system_pods.go:89] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.892702  450884 system_pods.go:89] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.892709  450884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.892715  450884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.892721  450884 system_pods.go:89] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.892727  450884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.892737  450884 system_pods.go:89] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.892743  450884 system_pods.go:89] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.892755  450884 system_pods.go:126] duration metric: took 204.423562ms to wait for k8s-apps to be running ...
	I0805 13:04:05.892765  450884 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:05.892819  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:05.907542  450884 system_svc.go:56] duration metric: took 14.764349ms WaitForService to wait for kubelet
	I0805 13:04:05.907576  450884 kubeadm.go:582] duration metric: took 3.820116927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:05.907599  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:06.089000  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:06.089025  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:06.089035  450884 node_conditions.go:105] duration metric: took 181.431221ms to run NodePressure ...
	I0805 13:04:06.089047  450884 start.go:241] waiting for startup goroutines ...
	I0805 13:04:06.089054  450884 start.go:246] waiting for cluster config update ...
	I0805 13:04:06.089065  450884 start.go:255] writing updated cluster config ...
	I0805 13:04:06.089373  450884 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:06.140202  450884 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:06.142149  450884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-371585" cluster and "default" namespace by default
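(Aside: the api_server.go lines in the block above amount to repeatedly issuing a GET against the apiserver's /healthz endpoint until it returns 200 with body "ok". Below is a minimal, self-contained Go sketch of that loop; it is not minikube's implementation, and the InsecureSkipVerify setting is an assumption made to keep the example short -- minikube verifies against the cluster CA instead. The endpoint URL is the one this log reports.)

// healthzpoll.go - sketch of polling an apiserver /healthz endpoint until it
// answers "ok", as the "waiting for apiserver healthz status" lines above do.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: skip TLS verification; real code should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Matches the `returned 200: ok` line in the log above.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.228:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("apiserver healthy")
}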
	I0805 13:04:02.115811  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:02.116057  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:12.115990  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:12.116208  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:32.116734  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:32.117001  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119196  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:05:12.119475  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119502  451238 kubeadm.go:310] 
	I0805 13:05:12.119564  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:05:12.119622  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:05:12.119634  451238 kubeadm.go:310] 
	I0805 13:05:12.119680  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:05:12.119724  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:05:12.119880  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:05:12.119898  451238 kubeadm.go:310] 
	I0805 13:05:12.120029  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:05:12.120114  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:05:12.120169  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:05:12.120179  451238 kubeadm.go:310] 
	I0805 13:05:12.120321  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:05:12.120445  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:05:12.120455  451238 kubeadm.go:310] 
	I0805 13:05:12.120612  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:05:12.120751  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:05:12.120888  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:05:12.121010  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:05:12.121023  451238 kubeadm.go:310] 
	I0805 13:05:12.121325  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:05:12.121458  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:05:12.121545  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 13:05:12.121714  451238 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
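(Aside: the repeated [kubelet-check] failures above come from kubeadm probing the kubelet's local healthz endpoint -- the 'curl -sSL http://localhost:10248/healthz' call shown in the log -- and getting connection refused because the kubelet never came up. A single-shot Go sketch of that probe, for illustration only:)

// kubeletprobe.go - illustration of the [kubelet-check] probe logged above:
// a plain HTTP GET against the kubelet's healthz port on localhost.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		// This is the failure mode in the log: dial tcp 127.0.0.1:10248: connect: connection refused.
		fmt.Println("kubelet healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
}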
	
	I0805 13:05:12.121782  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:05:12.587687  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:05:12.603422  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:05:12.614302  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:05:12.614330  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:05:12.614391  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:05:12.625131  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:05:12.625199  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:05:12.635606  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:05:12.644896  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:05:12.644953  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:05:12.655178  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.664668  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:05:12.664753  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.675174  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:05:12.684765  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:05:12.684834  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
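(Aside: the kubeadm.go:155-163 lines above implement a stale-kubeconfig check before retrying init: each /etc/kubernetes/*.conf is kept only if it references the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A rough Go sketch of that idea follows; the paths and endpoint are taken from this log, and it is not minikube's actual code.)

// staleconfig.go - sketch of the stale kubeconfig cleanup shown in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			// Matches the "No such file or directory" case in the log: nothing to clean up.
			fmt.Printf("%s: %v (will be recreated by kubeadm init)\n", p, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// The log does the same thing with `sudo rm -f <file>`.
			fmt.Printf("%s does not reference %s - removing\n", p, endpoint)
			_ = os.Remove(p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}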
	I0805 13:05:12.694762  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:05:12.930906  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:07:09.256859  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:07:09.257016  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 13:07:09.258511  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:07:09.258579  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:07:09.258710  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:07:09.258881  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:07:09.259022  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:07:09.259125  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:07:09.260912  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:07:09.261023  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:07:09.261123  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:07:09.261232  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:07:09.261319  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:07:09.261411  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:07:09.261507  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:07:09.261601  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:07:09.261690  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:07:09.261801  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:07:09.261946  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:07:09.262015  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:07:09.262119  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:07:09.262198  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:07:09.262273  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:07:09.262369  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:07:09.262464  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:07:09.262615  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:07:09.262731  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:07:09.262770  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:07:09.262831  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:07:09.264428  451238 out.go:204]   - Booting up control plane ...
	I0805 13:07:09.264537  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:07:09.264663  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:07:09.264774  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:07:09.264896  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:07:09.265144  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:07:09.265224  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:07:09.265318  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265554  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265630  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265783  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265886  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266143  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266221  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266387  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266472  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266656  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266673  451238 kubeadm.go:310] 
	I0805 13:07:09.266707  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:07:09.266738  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:07:09.266743  451238 kubeadm.go:310] 
	I0805 13:07:09.266788  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:07:09.266819  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:07:09.266924  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:07:09.266932  451238 kubeadm.go:310] 
	I0805 13:07:09.267050  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:07:09.267137  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:07:09.267192  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:07:09.267201  451238 kubeadm.go:310] 
	I0805 13:07:09.267316  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:07:09.267435  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:07:09.267445  451238 kubeadm.go:310] 
	I0805 13:07:09.267570  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:07:09.267683  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:07:09.267802  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:07:09.267898  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:07:09.267986  451238 kubeadm.go:310] 
	I0805 13:07:09.268003  451238 kubeadm.go:394] duration metric: took 7m57.870990174s to StartCluster
	I0805 13:07:09.268066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:07:09.268158  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:07:09.311436  451238 cri.go:89] found id: ""
	I0805 13:07:09.311471  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.311497  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:07:09.311509  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:07:09.311573  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:07:09.347748  451238 cri.go:89] found id: ""
	I0805 13:07:09.347776  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.347784  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:07:09.347797  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:07:09.347860  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:07:09.385418  451238 cri.go:89] found id: ""
	I0805 13:07:09.385445  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.385453  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:07:09.385460  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:07:09.385517  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:07:09.427209  451238 cri.go:89] found id: ""
	I0805 13:07:09.427255  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.427268  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:07:09.427276  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:07:09.427360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:07:09.461763  451238 cri.go:89] found id: ""
	I0805 13:07:09.461787  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.461795  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:07:09.461801  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:07:09.461854  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:07:09.498655  451238 cri.go:89] found id: ""
	I0805 13:07:09.498692  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.498705  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:07:09.498713  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:07:09.498782  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:07:09.534100  451238 cri.go:89] found id: ""
	I0805 13:07:09.534134  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.534143  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:07:09.534149  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:07:09.534207  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:07:09.570089  451238 cri.go:89] found id: ""
	I0805 13:07:09.570125  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.570137  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:07:09.570153  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:07:09.570176  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:07:09.625158  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:07:09.625199  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:07:09.640087  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:07:09.640119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:07:09.719851  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:07:09.719879  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:07:09.719895  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:07:09.832717  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:07:09.832758  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
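(Aside: the cri.go/logs.go lines above scan for control-plane containers by shelling out to crictl for each expected component and find none, which is why every check reports "No container was found matching ...". The Go sketch below shows that diagnostic pass in compact form; it simply wraps the same `sudo crictl ps -a --quiet --name=<component>` commands the log records and is illustrative only.)

// crilist.go - sketch of scanning for control-plane containers via crictl,
// mirroring the per-component listing in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component list copied from the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// The state this log reports: no container found for any component.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}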
	W0805 13:07:09.878585  451238 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 13:07:09.878653  451238 out.go:239] * 
	W0805 13:07:09.878739  451238 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.878767  451238 out.go:239] * 
	W0805 13:07:09.879755  451238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 13:07:09.883027  451238 out.go:177] 
	W0805 13:07:09.884197  451238 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.884243  451238 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 13:07:09.884265  451238 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 13:07:09.885783  451238 out.go:177] 
	
	
	==> CRI-O <==
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.488579983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863583488556367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38663813-2f43-43e7-88d4-dfcdb41e934c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.490934123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aab48320-2726-4de9-9ee2-4bc7fdc27c24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.491007057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aab48320-2726-4de9-9ee2-4bc7fdc27c24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.491184300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aab48320-2726-4de9-9ee2-4bc7fdc27c24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.533238959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ad7fdab-59a2-4975-a45b-9f03ced83f7b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.533619223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ad7fdab-59a2-4975-a45b-9f03ced83f7b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.535180688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa11ec1d-b12b-4af2-9e54-fffae0707360 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.535784792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863583535762183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa11ec1d-b12b-4af2-9e54-fffae0707360 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.536505227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c066f31a-c838-4b7c-82f2-ae9203ffd74c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.536559504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c066f31a-c838-4b7c-82f2-ae9203ffd74c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.536757268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c066f31a-c838-4b7c-82f2-ae9203ffd74c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.576734703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45085174-ece1-4917-857d-37a4a139fe95 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.576838426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45085174-ece1-4917-857d-37a4a139fe95 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.578333537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72571e29-432b-4118-87cf-136cc8b2994b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.578758362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863583578735638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72571e29-432b-4118-87cf-136cc8b2994b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.579345394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7936a10-8b32-4d01-b122-021891f1afe1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.579486407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7936a10-8b32-4d01-b122-021891f1afe1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.579703279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7936a10-8b32-4d01-b122-021891f1afe1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.613621307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d3ed7bf-ea48-49b3-8489-b566c5a11376 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.613694286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d3ed7bf-ea48-49b3-8489-b566c5a11376 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.614940688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23a6cd68-e0bd-4ffb-9939-1ed4ec383ce0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.615372345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863583615349798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23a6cd68-e0bd-4ffb-9939-1ed4ec383ce0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.615977987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e316e36-035f-4b41-8894-c5d2390e4469 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.616070342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e316e36-035f-4b41-8894-c5d2390e4469 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:03 embed-certs-321139 crio[733]: time="2024-08-05 13:13:03.616369001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e316e36-035f-4b41-8894-c5d2390e4469 name=/runtime.v1.RuntimeService/ListContainers
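The repeated Version/ImageFsInfo/ListContainers requests above are the kubelet's periodic CRI polling of cri-o; the same container inventory can be pulled by hand from inside the guest. A minimal sketch, assuming the embed-certs-321139 profile is still running and using the crictl binary shipped in the minikube guest image (illustrative commands, not part of the recorded test run):

	  # list all containers (running and exited) straight from cri-o
	  out/minikube-linux-amd64 -p embed-certs-321139 ssh "sudo crictl ps -a"
	  # same query against the CRI socket named in the node annotations above
	  out/minikube-linux-amd64 -p embed-certs-321139 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json"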
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07a14eee4cdae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   8d6b517e958ba       storage-provisioner
	8edf973c34f98       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   78c2f0eda34cc       busybox
	b22c1fc4aed8b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   d632aafdacf52       coredns-7db6d8ff4d-wm7lh
	c905047116d6c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   3ea21207a0402       kube-proxy-shgv2
	2d096466c2e0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8d6b517e958ba       storage-provisioner
	85c424836db21       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   72626e53802df       etcd-embed-certs-321139
	75f0d0c4ce468       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   cdba31db1da5c       kube-controller-manager-embed-certs-321139
	be59c5f295285       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   980b16fe922b8       kube-apiserver-embed-certs-321139
	8b55325728604       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   9b3e234d14973       kube-scheduler-embed-certs-321139
	
	
	==> coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57604 - 63795 "HINFO IN 1122241197051515001.1866069707439365595. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022287696s
	
	
	==> describe nodes <==
	Name:               embed-certs-321139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-321139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=embed-certs-321139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T12_50_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:50:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-321139
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 13:13:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 13:10:19 +0000   Mon, 05 Aug 2024 12:50:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 13:10:19 +0000   Mon, 05 Aug 2024 12:50:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 13:10:19 +0000   Mon, 05 Aug 2024 12:50:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 13:10:19 +0000   Mon, 05 Aug 2024 12:59:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    embed-certs-321139
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d261453e09c4e6981750858662a1300
	  System UUID:                7d261453-e09c-4e69-8175-0858662a1300
	  Boot ID:                    9d8267a6-aa4e-40a9-b37c-a96dabe9dd0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-wm7lh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-321139                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-321139             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-321139    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-shgv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-321139             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-k8mrt               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-321139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-321139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-321139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-321139 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node embed-certs-321139 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-321139 event: Registered Node embed-certs-321139 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-321139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-321139 event: Registered Node embed-certs-321139 in Controller
	
	
	==> dmesg <==
	[Aug 5 12:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055322] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042642] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.183394] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.655686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.450586] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.914135] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.059399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070737] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.202615] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.115666] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.297815] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.497767] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.072010] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.193486] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +5.605592] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.966443] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +3.748656] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.626320] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] <==
	{"level":"info","ts":"2024-08-05T12:59:33.355553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:59:33.355614Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T12:59:33.349687Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-05T12:59:33.349772Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:59:33.359169Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:59:33.359184Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T12:59:33.349907Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-08-05T12:59:33.359249Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-08-05T12:59:34.99512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T12:59:34.995179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T12:59:34.995213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgPreVoteResp from a14f9258d3b66c75 at term 2"}
	{"level":"info","ts":"2024-08-05T12:59:34.995224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T12:59:34.995229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgVoteResp from a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-08-05T12:59:34.995238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became leader at term 3"}
	{"level":"info","ts":"2024-08-05T12:59:34.995247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a14f9258d3b66c75 elected leader a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-08-05T12:59:35.006236Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a14f9258d3b66c75","local-member-attributes":"{Name:embed-certs-321139 ClientURLs:[https://192.168.39.196:2379]}","request-path":"/0/members/a14f9258d3b66c75/attributes","cluster-id":"8309c60c27e527a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:59:35.006428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:59:35.006549Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:59:35.006913Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:59:35.006925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:59:35.008857Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:59:35.009041Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.196:2379"}
	{"level":"info","ts":"2024-08-05T13:09:35.037471Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":844}
	{"level":"info","ts":"2024-08-05T13:09:35.048468Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":844,"took":"10.652872ms","hash":1339320383,"current-db-size-bytes":2232320,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2232320,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-05T13:09:35.048546Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1339320383,"revision":844,"compact-revision":-1}
	
	
	==> kernel <==
	 13:13:03 up 13 min,  0 users,  load average: 0.23, 0.19, 0.14
	Linux embed-certs-321139 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] <==
	I0805 13:07:37.326583       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:09:36.325715       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:09:36.326001       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0805 13:09:37.326401       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:09:37.326494       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:09:37.326521       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:09:37.326583       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:09:37.326647       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:09:37.327792       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:10:37.327404       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:10:37.327485       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:10:37.327508       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:10:37.328673       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:10:37.328820       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:10:37.328861       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:12:37.328061       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:12:37.328159       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:12:37.328168       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:12:37.329342       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:12:37.329484       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:12:37.329496       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] <==
	I0805 13:07:19.514955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:07:49.036459       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:07:49.523139       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:08:19.041890       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:08:19.530953       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:08:49.046923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:08:49.538480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:09:19.052459       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:09:19.548940       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:09:49.057683       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:09:49.557742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:10:19.062227       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:10:19.566831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:10:48.113863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="386.411µs"
	E0805 13:10:49.067343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:10:49.574731       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:11:02.119879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="172.785µs"
	E0805 13:11:19.073760       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:11:19.594998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:11:49.079729       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:11:49.603509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:12:19.084476       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:12:19.611181       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:12:49.089562       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:12:49.621223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] <==
	I0805 12:59:37.756894       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:59:37.767237       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0805 12:59:37.835578       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:59:37.835641       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:59:37.835666       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:59:37.846864       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:59:37.851436       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:59:37.851502       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:59:37.859743       1 config.go:192] "Starting service config controller"
	I0805 12:59:37.859759       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:59:37.859865       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:59:37.859870       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:59:37.861998       1 config.go:319] "Starting node config controller"
	I0805 12:59:37.862032       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:59:37.959920       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:59:37.959983       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:59:37.962082       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] <==
	I0805 12:59:34.246812       1 serving.go:380] Generated self-signed cert in-memory
	W0805 12:59:36.278609       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:59:36.278653       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:59:36.278666       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:59:36.278672       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:59:36.316202       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 12:59:36.316467       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:59:36.322690       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:59:36.322790       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:59:36.322819       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:59:36.322833       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 12:59:36.423010       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 13:10:37 embed-certs-321139 kubelet[943]: E0805 13:10:37.109442     943 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 05 13:10:37 embed-certs-321139 kubelet[943]: E0805 13:10:37.109551     943 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 05 13:10:37 embed-certs-321139 kubelet[943]: E0805 13:10:37.110031     943 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdh7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-k8mrt_kube-system(6d400b20-5de5-4046-b773-39766c67cdb4): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 05 13:10:37 embed-certs-321139 kubelet[943]: E0805 13:10:37.110135     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:10:48 embed-certs-321139 kubelet[943]: E0805 13:10:48.095947     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:11:02 embed-certs-321139 kubelet[943]: E0805 13:11:02.100438     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:11:17 embed-certs-321139 kubelet[943]: E0805 13:11:17.096126     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:11:29 embed-certs-321139 kubelet[943]: E0805 13:11:29.095829     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:11:32 embed-certs-321139 kubelet[943]: E0805 13:11:32.124608     943 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:11:32 embed-certs-321139 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:11:32 embed-certs-321139 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:11:32 embed-certs-321139 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:11:32 embed-certs-321139 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:11:41 embed-certs-321139 kubelet[943]: E0805 13:11:41.095924     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:11:54 embed-certs-321139 kubelet[943]: E0805 13:11:54.095713     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:12:09 embed-certs-321139 kubelet[943]: E0805 13:12:09.096303     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:12:23 embed-certs-321139 kubelet[943]: E0805 13:12:23.096079     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:12:32 embed-certs-321139 kubelet[943]: E0805 13:12:32.125505     943 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:12:32 embed-certs-321139 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:12:32 embed-certs-321139 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:12:32 embed-certs-321139 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:12:32 embed-certs-321139 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:12:37 embed-certs-321139 kubelet[943]: E0805 13:12:37.096055     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:12:50 embed-certs-321139 kubelet[943]: E0805 13:12:50.095394     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:13:04 embed-certs-321139 kubelet[943]: E0805 13:13:04.095981     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	
	
	==> storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] <==
	I0805 13:00:08.414946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 13:00:08.423918       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 13:00:08.424001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 13:00:08.436404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 13:00:08.436577       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-321139_802883f9-fd9c-4117-8935-6f2099d3f05c!
	I0805 13:00:08.436992       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8dada66-6135-4655-a5db-5fefeff62831", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-321139_802883f9-fd9c-4117-8935-6f2099d3f05c became leader
	I0805 13:00:08.537633       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-321139_802883f9-fd9c-4117-8935-6f2099d3f05c!
	
	
	==> storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] <==
	I0805 12:59:37.701889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0805 13:00:07.704372       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
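The recurring kubelet ErrImagePull/ImagePullBackOff messages in the log above reduce to a DNS failure: the test deliberately points the metrics-server image at the unreachable registry host fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the Audit table further below). A minimal Go sketch of that same lookup failure, offered only as an illustration (the hostname comes from the log; everything else is assumed):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// "fake.domain" is the registry host the test injects; it does not resolve,
		// which is the "lookup fake.domain: no such host" error kubelet keeps logging.
		if _, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println("lookup failed as expected:", err)
		}
	}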
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-321139 -n embed-certs-321139
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-321139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-k8mrt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-321139 describe pod metrics-server-569cc877fc-k8mrt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-321139 describe pod metrics-server-569cc877fc-k8mrt: exit status 1 (62.624208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-k8mrt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-321139 describe pod metrics-server-569cc877fc-k8mrt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)
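For reference, the non-running-pod check in the post-mortem above (kubectl get po -A --field-selector=status.phase!=Running) can be expressed with client-go. This is a minimal sketch, not minikube's helper code; the kubeconfig handling and the embed-certs-321139 context name are assumptions:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig and select the profile's context (illustrative only).
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "embed-certs-321139"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List pods in all namespaces that are not in the Running phase.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}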

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0805 13:04:06.749409  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 13:05:07.008271  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 13:05:27.753040  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 13:05:48.987108  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 13:05:49.458791  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 13:07:09.900483  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-05 13:13:06.679912377 +0000 UTC m=+6373.637236065
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
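The wait that times out here is a label-selector poll; a rough client-go sketch (the 9-minute timeout, namespace, and label come from the log above; the rest, including the kubeconfig path, is assumed) follows:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig (~/.kube/config) points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until a pod labelled k8s-app=kubernetes-dashboard is Running, or 9m elapses.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("dashboard pod running:", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out:", ctx.Err())
				return
			case <-time.After(5 * time.Second):
			}
		}
	}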
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-371585 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-371585 logs -n 25: (2.065349532s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-119870 sudo cat                              | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo find                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo crio                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-119870                                       | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:55:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:55:11.960471  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960479  451238 out.go:304] Setting ErrFile to fd 2...
	I0805 12:55:11.960484  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960646  451238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:55:11.961145  451238 out.go:298] Setting JSON to false
	I0805 12:55:11.962063  451238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9459,"bootTime":1722853053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:55:11.962121  451238 start.go:139] virtualization: kvm guest
	I0805 12:55:11.964372  451238 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:55:11.965770  451238 notify.go:220] Checking for updates...
	I0805 12:55:11.965787  451238 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:55:11.967106  451238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:55:11.968790  451238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:55:11.970181  451238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:55:11.971500  451238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:55:11.973243  451238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:55:11.974825  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:55:11.975239  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.975319  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:11.990296  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0805 12:55:11.990704  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:11.991235  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:11.991259  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:11.991575  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:11.991765  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:11.993484  451238 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 12:55:11.994687  451238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:55:11.994952  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.994984  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:12.009528  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0805 12:55:12.009879  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:12.010353  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:12.010375  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:12.010670  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:12.010857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:12.044634  451238 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:55:12.045859  451238 start.go:297] selected driver: kvm2
	I0805 12:55:12.045876  451238 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.045987  451238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:55:12.046662  451238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.046731  451238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:55:12.061918  451238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:55:12.062400  451238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:55:12.062484  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:55:12.062502  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:55:12.062572  451238 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.062722  451238 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.064478  451238 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:55:10.820047  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:13.892041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:12.065640  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:55:12.065680  451238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:55:12.065701  451238 cache.go:56] Caching tarball of preloaded images
	I0805 12:55:12.065786  451238 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:55:12.065797  451238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:55:12.065897  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:55:12.066073  451238 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:55:19.971977  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:23.044092  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:29.124041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:32.196124  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:38.276045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:41.348117  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:47.428042  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:50.500022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:56.580074  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:59.652091  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:05.732072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:08.804128  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:14.884085  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:17.956073  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:24.036067  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:27.108059  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:33.188012  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:36.260134  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:42.340036  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:45.412038  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:51.492022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:54.564068  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:00.644018  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:03.716112  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:09.796041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:12.868080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:18.948054  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:22.020023  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:28.100099  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:31.172076  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:37.251997  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:40.324080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:46.404055  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:49.476072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:55.556045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:58.627984  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:58:01.632326  450576 start.go:364] duration metric: took 4m17.994768704s to acquireMachinesLock for "no-preload-669469"
	I0805 12:58:01.632391  450576 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:01.632403  450576 fix.go:54] fixHost starting: 
	I0805 12:58:01.632845  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:01.632880  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:01.648358  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0805 12:58:01.648860  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:01.649387  450576 main.go:141] libmachine: Using API Version  1
	I0805 12:58:01.649410  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:01.649779  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:01.649963  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:01.650176  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 12:58:01.651681  450576 fix.go:112] recreateIfNeeded on no-preload-669469: state=Stopped err=<nil>
	I0805 12:58:01.651715  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	W0805 12:58:01.651903  450576 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:01.653860  450576 out.go:177] * Restarting existing kvm2 VM for "no-preload-669469" ...
	I0805 12:58:01.655338  450576 main.go:141] libmachine: (no-preload-669469) Calling .Start
	I0805 12:58:01.655475  450576 main.go:141] libmachine: (no-preload-669469) Ensuring networks are active...
	I0805 12:58:01.656224  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network default is active
	I0805 12:58:01.656565  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network mk-no-preload-669469 is active
	I0805 12:58:01.656898  450576 main.go:141] libmachine: (no-preload-669469) Getting domain xml...
	I0805 12:58:01.657537  450576 main.go:141] libmachine: (no-preload-669469) Creating domain...
	I0805 12:58:02.879809  450576 main.go:141] libmachine: (no-preload-669469) Waiting to get IP...
	I0805 12:58:02.880800  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:02.881194  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:02.881270  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:02.881175  451829 retry.go:31] will retry after 303.380177ms: waiting for machine to come up
	I0805 12:58:03.185834  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.186259  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.186288  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.186214  451829 retry.go:31] will retry after 263.494141ms: waiting for machine to come up
	I0805 12:58:03.451923  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.452263  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.452340  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.452217  451829 retry.go:31] will retry after 310.615163ms: waiting for machine to come up
	I0805 12:58:01.629832  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:01.629873  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630250  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:58:01.630295  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630511  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:58:01.632158  450393 machine.go:97] duration metric: took 4m37.422562602s to provisionDockerMachine
	I0805 12:58:01.632208  450393 fix.go:56] duration metric: took 4m37.444588707s for fixHost
	I0805 12:58:01.632226  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 4m37.44461751s
	W0805 12:58:01.632250  450393 start.go:714] error starting host: provision: host is not running
	W0805 12:58:01.632431  450393 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0805 12:58:01.632445  450393 start.go:729] Will try again in 5 seconds ...
	I0805 12:58:03.764803  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.765280  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.765305  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.765243  451829 retry.go:31] will retry after 570.955722ms: waiting for machine to come up
	I0805 12:58:04.338423  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.338863  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.338893  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.338811  451829 retry.go:31] will retry after 485.490715ms: waiting for machine to come up
	I0805 12:58:04.825511  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.825882  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.825911  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.825823  451829 retry.go:31] will retry after 671.109731ms: waiting for machine to come up
	I0805 12:58:05.498113  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:05.498529  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:05.498557  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:05.498467  451829 retry.go:31] will retry after 997.668856ms: waiting for machine to come up
	I0805 12:58:06.497843  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:06.498144  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:06.498161  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:06.498120  451829 retry.go:31] will retry after 996.614411ms: waiting for machine to come up
	I0805 12:58:07.496801  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:07.497298  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:07.497334  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:07.497249  451829 retry.go:31] will retry after 1.384682595s: waiting for machine to come up
	I0805 12:58:06.634410  450393 start.go:360] acquireMachinesLock for embed-certs-321139: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:58:08.883309  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:08.883701  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:08.883732  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:08.883642  451829 retry.go:31] will retry after 2.017073843s: waiting for machine to come up
	I0805 12:58:10.903852  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:10.904279  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:10.904310  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:10.904233  451829 retry.go:31] will retry after 2.485880433s: waiting for machine to come up
	I0805 12:58:13.392693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:13.393169  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:13.393199  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:13.393116  451829 retry.go:31] will retry after 2.986076236s: waiting for machine to come up
	I0805 12:58:16.380921  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:16.381475  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:16.381508  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:16.381432  451829 retry.go:31] will retry after 4.291617536s: waiting for machine to come up
	I0805 12:58:21.948770  450884 start.go:364] duration metric: took 4m4.773878111s to acquireMachinesLock for "default-k8s-diff-port-371585"
	I0805 12:58:21.948843  450884 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:21.948851  450884 fix.go:54] fixHost starting: 
	I0805 12:58:21.949291  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:21.949337  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:21.966933  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0805 12:58:21.967356  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:21.967874  450884 main.go:141] libmachine: Using API Version  1
	I0805 12:58:21.967899  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:21.968326  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:21.968638  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:21.968874  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 12:58:21.970608  450884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371585: state=Stopped err=<nil>
	I0805 12:58:21.970631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	W0805 12:58:21.970789  450884 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:21.973235  450884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-371585" ...
	I0805 12:58:21.974564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Start
	I0805 12:58:21.974751  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring networks are active...
	I0805 12:58:21.975581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network default is active
	I0805 12:58:21.976001  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network mk-default-k8s-diff-port-371585 is active
	I0805 12:58:21.976376  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Getting domain xml...
	I0805 12:58:21.977078  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Creating domain...
	I0805 12:58:20.678231  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678743  450576 main.go:141] libmachine: (no-preload-669469) Found IP for machine: 192.168.72.223
	I0805 12:58:20.678771  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has current primary IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678786  450576 main.go:141] libmachine: (no-preload-669469) Reserving static IP address...
	I0805 12:58:20.679230  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.679266  450576 main.go:141] libmachine: (no-preload-669469) Reserved static IP address: 192.168.72.223
	I0805 12:58:20.679288  450576 main.go:141] libmachine: (no-preload-669469) DBG | skip adding static IP to network mk-no-preload-669469 - found existing host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"}
	I0805 12:58:20.679302  450576 main.go:141] libmachine: (no-preload-669469) DBG | Getting to WaitForSSH function...
	I0805 12:58:20.679317  450576 main.go:141] libmachine: (no-preload-669469) Waiting for SSH to be available...
	I0805 12:58:20.681864  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682263  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.682297  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682447  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH client type: external
	I0805 12:58:20.682484  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa (-rw-------)
	I0805 12:58:20.682539  450576 main.go:141] libmachine: (no-preload-669469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:20.682557  450576 main.go:141] libmachine: (no-preload-669469) DBG | About to run SSH command:
	I0805 12:58:20.682568  450576 main.go:141] libmachine: (no-preload-669469) DBG | exit 0
	I0805 12:58:20.807791  450576 main.go:141] libmachine: (no-preload-669469) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:20.808168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetConfigRaw
	I0805 12:58:20.808767  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:20.811170  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811486  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.811517  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811738  450576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/config.json ...
	I0805 12:58:20.811957  450576 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:20.811976  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:20.812203  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.814305  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814656  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.814693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814823  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.814996  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815156  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815329  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.815503  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.815871  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.815887  450576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:20.920311  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:20.920344  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920642  450576 buildroot.go:166] provisioning hostname "no-preload-669469"
	I0805 12:58:20.920695  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920951  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.924029  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924583  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.924611  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924770  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.925001  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925190  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925334  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.925514  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.925755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.925774  450576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-669469 && echo "no-preload-669469" | sudo tee /etc/hostname
	I0805 12:58:21.046579  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-669469
	
	I0805 12:58:21.046614  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.049322  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049657  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.049687  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049851  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.050049  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050239  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050412  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.050588  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.050755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.050771  450576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-669469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-669469/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-669469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:21.165100  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:21.165134  450576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:21.165170  450576 buildroot.go:174] setting up certificates
	I0805 12:58:21.165180  450576 provision.go:84] configureAuth start
	I0805 12:58:21.165191  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:21.165477  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.168018  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168399  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.168443  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168703  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.171168  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171536  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.171565  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171638  450576 provision.go:143] copyHostCerts
	I0805 12:58:21.171713  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:21.171724  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:21.171807  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:21.171920  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:21.171930  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:21.171955  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:21.172010  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:21.172016  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:21.172037  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:21.172095  450576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.no-preload-669469 san=[127.0.0.1 192.168.72.223 localhost minikube no-preload-669469]
	I0805 12:58:21.287395  450576 provision.go:177] copyRemoteCerts
	I0805 12:58:21.287463  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:21.287505  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.290416  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290765  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.290796  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290962  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.291169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.291323  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.291460  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.373992  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:58:21.398249  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:21.422950  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:58:21.446469  450576 provision.go:87] duration metric: took 281.275299ms to configureAuth
	I0805 12:58:21.446500  450576 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:21.446688  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:58:21.446813  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.449833  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450219  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.450235  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450526  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.450814  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.450993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.451168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.451342  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.451515  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.451532  450576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:21.714813  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:21.714842  450576 machine.go:97] duration metric: took 902.872257ms to provisionDockerMachine
	I0805 12:58:21.714858  450576 start.go:293] postStartSetup for "no-preload-669469" (driver="kvm2")
	I0805 12:58:21.714889  450576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:21.714940  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.715304  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:21.715333  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.717989  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718405  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.718427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718597  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.718832  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.718993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.719152  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.802634  450576 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:21.806957  450576 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:21.806985  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:21.807079  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:21.807186  450576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:21.807293  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:21.816690  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:21.839848  450576 start.go:296] duration metric: took 124.973515ms for postStartSetup
	I0805 12:58:21.839903  450576 fix.go:56] duration metric: took 20.207499572s for fixHost
	I0805 12:58:21.839934  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.842548  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.842869  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.842893  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.843090  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.843310  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843502  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843640  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.843815  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.844015  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.844029  450576 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:21.948584  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862701.921979093
	
	I0805 12:58:21.948613  450576 fix.go:216] guest clock: 1722862701.921979093
	I0805 12:58:21.948623  450576 fix.go:229] Guest: 2024-08-05 12:58:21.921979093 +0000 UTC Remote: 2024-08-05 12:58:21.83991063 +0000 UTC m=+278.340267839 (delta=82.068463ms)
	I0805 12:58:21.948671  450576 fix.go:200] guest clock delta is within tolerance: 82.068463ms
	I0805 12:58:21.948680  450576 start.go:83] releasing machines lock for "no-preload-669469", held for 20.316310092s
	I0805 12:58:21.948713  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.948990  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.951624  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952086  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.952136  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952256  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952797  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952984  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.953065  450576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:21.953113  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.953227  450576 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:21.953255  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.955837  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956081  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956200  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956227  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956370  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956504  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956528  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.956568  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956670  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.956760  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956857  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.956906  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.957058  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.957205  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:22.058847  450576 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:22.065110  450576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:22.211415  450576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:22.219405  450576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:22.219492  450576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:22.240631  450576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:22.240659  450576 start.go:495] detecting cgroup driver to use...
	I0805 12:58:22.240764  450576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:22.258777  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:22.273312  450576 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:22.273400  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:22.288455  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:22.305028  450576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:22.428098  450576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:22.586232  450576 docker.go:233] disabling docker service ...
	I0805 12:58:22.586318  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:22.611888  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:22.627393  450576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:22.757335  450576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:22.878168  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:22.896174  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:22.914395  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:23.229202  450576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 12:58:23.229300  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.242180  450576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:23.242262  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.254577  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.265805  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.276522  450576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:23.287288  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.297863  450576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.314322  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.324662  450576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:23.334125  450576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:23.334192  450576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:23.346701  450576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:23.356256  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:23.474046  450576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:23.617276  450576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:23.617363  450576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:23.622001  450576 start.go:563] Will wait 60s for crictl version
	I0805 12:58:23.622047  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:23.626041  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:23.670186  450576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:23.670267  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.700616  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.733411  450576 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 12:58:23.254293  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting to get IP...
	I0805 12:58:23.255331  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255802  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255880  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.255773  451963 retry.go:31] will retry after 245.269435ms: waiting for machine to come up
	I0805 12:58:23.502617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503130  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.503068  451963 retry.go:31] will retry after 243.155673ms: waiting for machine to come up
	I0805 12:58:23.747498  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747913  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.747867  451963 retry.go:31] will retry after 459.286566ms: waiting for machine to come up
	I0805 12:58:24.208594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209076  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209127  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.209003  451963 retry.go:31] will retry after 499.069946ms: waiting for machine to come up
	I0805 12:58:24.709128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709554  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709577  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.709512  451963 retry.go:31] will retry after 732.735525ms: waiting for machine to come up
	I0805 12:58:25.443632  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444185  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:25.444125  451963 retry.go:31] will retry after 883.69375ms: waiting for machine to come up
	I0805 12:58:26.329477  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330010  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:26.329947  451963 retry.go:31] will retry after 1.157298734s: waiting for machine to come up
	I0805 12:58:23.734875  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:23.737945  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738460  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:23.738487  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738646  450576 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:23.742894  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:23.756164  450576 kubeadm.go:883] updating cluster {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:23.756435  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.035575  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.352144  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.657175  450576 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 12:58:24.657266  450576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:24.694685  450576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 12:58:24.694720  450576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:58:24.694809  450576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.694831  450576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.694845  450576 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0805 12:58:24.694867  450576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.694835  450576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.694815  450576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.694801  450576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.694917  450576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.696859  450576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.696860  450576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696902  450576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.696904  450576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.696881  450576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0805 12:58:24.864249  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.867334  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.905018  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.920294  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.925405  450576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0805 12:58:24.925440  450576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0805 12:58:24.925456  450576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.925476  450576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.925508  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.925520  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.973191  450576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0805 12:58:24.973240  450576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.973304  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.986685  450576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0805 12:58:24.986706  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.986723  450576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.986772  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.037012  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.037066  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:25.037132  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.067311  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.068528  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.073769  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073831  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.073872  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073933  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.082476  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0805 12:58:25.126044  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0805 12:58:25.126080  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126127  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0805 12:58:25.126144  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126230  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:25.149903  450576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0805 12:58:25.149965  450576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.150028  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196288  450576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0805 12:58:25.196336  450576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.196388  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196416  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0805 12:58:25.196510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0805 12:58:25.651632  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.532922  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.406747514s)
	I0805 12:58:27.532959  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0805 12:58:27.532994  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533010  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.406755032s)
	I0805 12:58:27.533048  450576 ssh_runner.go:235] Completed: which crictl: (2.383000552s)
	I0805 12:58:27.533050  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0805 12:58:27.533082  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533082  450576 ssh_runner.go:235] Completed: which crictl: (2.336681164s)
	I0805 12:58:27.533095  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:27.533117  450576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.88145852s)
	I0805 12:58:27.533139  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:27.533161  450576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0805 12:58:27.533198  450576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.533234  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:27.488683  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489080  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489108  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:27.489027  451963 retry.go:31] will retry after 997.566168ms: waiting for machine to come up
	I0805 12:58:28.488397  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488846  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488878  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:28.488794  451963 retry.go:31] will retry after 1.327498575s: waiting for machine to come up
	I0805 12:58:29.818339  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818705  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818735  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:29.818660  451963 retry.go:31] will retry after 2.105158858s: waiting for machine to come up
	I0805 12:58:31.925036  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925601  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:31.925492  451963 retry.go:31] will retry after 2.860711737s: waiting for machine to come up
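
The retry.go:31 lines above show libmachine polling the libvirt network's DHCP leases until the default-k8s-diff-port-371585 VM picks up an address, sleeping a growing, jittered interval between attempts. Below is a self-contained sketch of that poll-with-backoff pattern; the function names (waitForIP, lookupLeaseIP), the starting delay, and the cap are illustrative assumptions, not minikube's actual code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt network for a DHCP lease
// matching the machine's MAC address; it returns "" while no lease exists.
func lookupLeaseIP(mac string) (string, error) {
	return "", nil // placeholder: lease not found yet
}

// waitForIP polls until an address appears or the timeout expires, doubling
// the delay (with jitter) each round, like the "will retry after ..." lines.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupLeaseIP(mac)
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	if _, err := waitForIP("52:54:00:f4:9f:83", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
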
	I0805 12:58:29.629896  450576 ssh_runner.go:235] Completed: which crictl: (2.096633143s)
	I0805 12:58:29.630000  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:29.630084  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.096969259s)
	I0805 12:58:29.630184  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0805 12:58:29.630102  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.09697893s)
	I0805 12:58:29.630255  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0805 12:58:29.630121  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0: (2.096957841s)
	I0805 12:58:29.630282  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:29.630286  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630313  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.630322  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630381  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.675831  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0805 12:58:29.675914  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 12:58:29.676019  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:31.695376  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.06501136s)
	I0805 12:58:31.695429  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0805 12:58:31.695458  450576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:31.695476  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.019437866s)
	I0805 12:58:31.695382  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (2.064967299s)
	I0805 12:58:31.695510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0805 12:58:31.695523  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0805 12:58:31.695536  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:34.789126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789644  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789673  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:34.789592  451963 retry.go:31] will retry after 2.763937018s: waiting for machine to come up
	I0805 12:58:33.659147  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.963588438s)
	I0805 12:58:33.659183  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0805 12:58:33.659216  450576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:33.659263  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:37.466579  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.807281649s)
	I0805 12:58:37.466623  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0805 12:58:37.466657  450576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:37.466709  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:38.111584  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 12:58:38.111633  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:38.111678  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:37.554827  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555233  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555263  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:37.555184  451963 retry.go:31] will retry after 3.143735106s: waiting for machine to come up
	I0805 12:58:40.701139  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701615  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Found IP for machine: 192.168.50.228
	I0805 12:58:40.701649  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has current primary IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701660  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserving static IP address...
	I0805 12:58:40.702105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.702126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserved static IP address: 192.168.50.228
	I0805 12:58:40.702146  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | skip adding static IP to network mk-default-k8s-diff-port-371585 - found existing host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"}
	I0805 12:58:40.702156  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for SSH to be available...
	I0805 12:58:40.702198  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Getting to WaitForSSH function...
	I0805 12:58:40.704600  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.704920  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.704950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.705091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH client type: external
	I0805 12:58:40.705129  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa (-rw-------)
	I0805 12:58:40.705179  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:40.705200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | About to run SSH command:
	I0805 12:58:40.705218  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | exit 0
	I0805 12:58:40.836818  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:40.837228  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetConfigRaw
	I0805 12:58:40.837884  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:40.840503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.840843  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.840870  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.841129  450884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/config.json ...
	I0805 12:58:40.841353  450884 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:40.841373  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:40.841587  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.843943  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844308  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.844336  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.844614  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844922  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.845067  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.845322  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.845333  450884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:40.952367  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:40.952410  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952733  450884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-371585"
	I0805 12:58:40.952762  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.955642  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.956077  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.956493  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956651  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956804  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.957027  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.957239  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.957255  450884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-371585 && echo "default-k8s-diff-port-371585" | sudo tee /etc/hostname
	I0805 12:58:41.077775  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-371585
	
	I0805 12:58:41.077808  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.080777  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081230  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.081273  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081406  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.081631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081963  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.082139  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.082315  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.082333  450884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-371585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-371585/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-371585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:41.200835  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:41.200871  450884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:41.200923  450884 buildroot.go:174] setting up certificates
	I0805 12:58:41.200934  450884 provision.go:84] configureAuth start
	I0805 12:58:41.200945  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:41.201284  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:41.204107  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204460  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.204494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.206634  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.206948  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.206977  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.207048  450884 provision.go:143] copyHostCerts
	I0805 12:58:41.207139  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:41.207151  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:41.207215  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:41.207333  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:41.207345  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:41.207372  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:41.207451  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:41.207462  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:41.207502  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:41.207573  450884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-371585 san=[127.0.0.1 192.168.50.228 default-k8s-diff-port-371585 localhost minikube]
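
provision.go:117 above issues a server certificate signed by the shared minikube CA, with the SANs listed in the san=[...] field (loopback, the VM's IP, the machine name, localhost, minikube). The sketch below shows the general shape of producing a CA-signed certificate with DNS and IP SANs using Go's crypto/x509; it creates a throwaway CA in place of the real ca.pem/ca-key.pem pair and is illustrative only, not the provisioner's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the persistent ca.pem / ca-key.pem pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-371585"}},
		DNSNames:     []string{"default-k8s-diff-port-371585", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.228")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
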
	I0805 12:58:41.357243  450884 provision.go:177] copyRemoteCerts
	I0805 12:58:41.357344  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:41.357386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.360309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360697  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.360738  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360933  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.361120  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.361295  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.361474  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.454251  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:41.480595  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0805 12:58:41.506729  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:58:41.533349  450884 provision.go:87] duration metric: took 332.399026ms to configureAuth
	I0805 12:58:41.533402  450884 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:41.533575  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:58:41.533655  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.536469  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.536831  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.536862  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.537006  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.537197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537541  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.537734  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.537946  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.537968  450884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:41.827043  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:41.827078  450884 machine.go:97] duration metric: took 985.710155ms to provisionDockerMachine
	I0805 12:58:41.827095  450884 start.go:293] postStartSetup for "default-k8s-diff-port-371585" (driver="kvm2")
	I0805 12:58:41.827109  450884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:41.827145  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:41.827564  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:41.827605  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.830350  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830724  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.830761  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830853  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.831034  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.831206  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.831329  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.915261  450884 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:41.919719  450884 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:41.919760  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:41.919835  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:41.919930  450884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:41.920062  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:41.929842  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:41.958933  450884 start.go:296] duration metric: took 131.820227ms for postStartSetup
	I0805 12:58:41.958981  450884 fix.go:56] duration metric: took 20.010130311s for fixHost
	I0805 12:58:41.959012  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.962092  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962510  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.962540  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962726  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.962968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963153  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.963479  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.963687  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.963700  450884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:42.080993  451238 start.go:364] duration metric: took 3m30.014883629s to acquireMachinesLock for "old-k8s-version-635707"
	I0805 12:58:42.081066  451238 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:42.081076  451238 fix.go:54] fixHost starting: 
	I0805 12:58:42.081569  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:42.081611  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:42.101889  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0805 12:58:42.102366  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:42.102910  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:58:42.102947  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:42.103310  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:42.103552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:58:42.103718  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:58:42.105465  451238 fix.go:112] recreateIfNeeded on old-k8s-version-635707: state=Stopped err=<nil>
	I0805 12:58:42.105504  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	W0805 12:58:42.105674  451238 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:42.107563  451238 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-635707" ...
	I0805 12:58:39.567840  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.456137011s)
	I0805 12:58:39.567879  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0805 12:58:39.567905  450576 cache_images.go:123] Successfully loaded all cached images
	I0805 12:58:39.567911  450576 cache_images.go:92] duration metric: took 14.873174481s to LoadCachedImages
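
The run from "LoadCachedImages start" to the duration line above repeats one pattern per image: inspect the runtime store (sudo podman image inspect --format {{.Id}} ...), remove a stale tag with crictl rmi when the stored hash does not match, skip the copy when the tarball already sits under /var/lib/minikube/images, then sudo podman load -i <tarball>. A compressed local sketch of that loop is below; it shells out to the same podman commands seen in the log but runs them directly instead of through ssh_runner, omits the crictl rmi step, and the helper name loadCachedImage is made up for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// loadCachedImage (hypothetical) mirrors the per-image flow in the log:
// inspect first, and only load the cached tarball if the image is missing.
func loadCachedImage(image, cacheDir string) error {
	// Step 1: is the image already present in the container store?
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) != "" {
		return nil // already loaded, nothing to transfer
	}

	// Step 2: locate the cached tarball, e.g. kube-proxy_v1.31.0-rc.0.
	name := strings.ReplaceAll(filepath.Base(image), ":", "_")
	tarball := filepath.Join(cacheDir, name)
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("no cached tarball for %s: %w", image, err)
	}

	// Step 3: load it into the runtime (the "podman load -i" lines above).
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
		"registry.k8s.io/pause:3.10",
	} {
		if err := loadCachedImage(img, "/var/lib/minikube/images"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
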
	I0805 12:58:39.567921  450576 kubeadm.go:934] updating node { 192.168.72.223 8443 v1.31.0-rc.0 crio true true} ...
	I0805 12:58:39.568053  450576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-669469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:39.568137  450576 ssh_runner.go:195] Run: crio config
	I0805 12:58:39.616607  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:39.616634  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:39.616660  450576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:39.616683  450576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.223 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-669469 NodeName:no-preload-669469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:39.616822  450576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-669469"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:39.616896  450576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 12:58:39.627827  450576 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:39.627901  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:39.637348  450576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0805 12:58:39.653917  450576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 12:58:39.670196  450576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0805 12:58:39.686922  450576 ssh_runner.go:195] Run: grep 192.168.72.223	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:39.690804  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:39.703146  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:39.834718  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:39.857015  450576 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469 for IP: 192.168.72.223
	I0805 12:58:39.857036  450576 certs.go:194] generating shared ca certs ...
	I0805 12:58:39.857057  450576 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:39.857229  450576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:39.857286  450576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:39.857300  450576 certs.go:256] generating profile certs ...
	I0805 12:58:39.857431  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/client.key
	I0805 12:58:39.857489  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key.dd0884bb
	I0805 12:58:39.857535  450576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key
	I0805 12:58:39.857683  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:39.857723  450576 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:39.857739  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:39.857769  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:39.857834  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:39.857872  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:39.857923  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:39.858695  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:39.895944  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:39.925816  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:39.960150  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:39.993307  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:58:40.027900  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:58:40.053492  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:40.077331  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:40.101010  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:40.123991  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:40.147563  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:40.170414  450576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:40.188256  450576 ssh_runner.go:195] Run: openssl version
	I0805 12:58:40.193955  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:40.204793  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209061  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209115  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.214948  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:40.226193  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:40.237723  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.241960  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.242019  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.247502  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:40.258791  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:40.270176  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274717  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274786  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.280457  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:40.292091  450576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:40.296842  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:40.303003  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:40.309009  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:40.314951  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:40.320674  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:40.326433  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
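Two openssl idioms appear in the certificate wiring above: `x509 -hash` prints the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs (hence the `<hash>.0` symlinks such as b5213941.0 for minikubeCA.pem), and `-checkend 86400` exits non-zero if the certificate expires within the next 24 hours. A small sketch combining both, using one of the cert paths from the log:

	# Sketch: reproduce the two openssl checks minikube runs against a CA cert.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # how OpenSSL locates trusted CAs
	openssl x509 -noout -in "$CERT" -checkend 86400 \
	  && echo "still valid for >= 24h" || echo "expires within 24h"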
	I0805 12:58:40.331848  450576 kubeadm.go:392] StartCluster: {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:40.331938  450576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:40.331975  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.374390  450576 cri.go:89] found id: ""
	I0805 12:58:40.374482  450576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:40.385467  450576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:40.385485  450576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:40.385531  450576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:40.395411  450576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:40.396455  450576 kubeconfig.go:125] found "no-preload-669469" server: "https://192.168.72.223:8443"
	I0805 12:58:40.400090  450576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:40.410942  450576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.223
	I0805 12:58:40.410971  450576 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:40.410985  450576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:40.411032  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.453021  450576 cri.go:89] found id: ""
	I0805 12:58:40.453115  450576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:40.470389  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:40.480421  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:40.480445  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:40.480502  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:58:40.489625  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:40.489672  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:40.499261  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:58:40.508571  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:40.508634  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:40.517811  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.526563  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:40.526620  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.535753  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:58:40.544981  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:40.545040  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
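The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8443 and removed when the check fails (here all four files are simply absent, so the rm calls are no-ops). The same logic, compressed into a shell loop purely for illustration:

	# Sketch of the stale-kubeconfig cleanup the log performs file by file.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done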
	I0805 12:58:40.555237  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:40.565180  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:40.683889  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.632122  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.866665  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.944022  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
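Because this is a restart of an existing control plane rather than a fresh `kubeadm init`, minikube drives the individual init phases itself, in the order shown: certs, kubeconfig, kubelet-start, control-plane, etcd. The equivalent sequence as a plain script, with the versioned binary path taken from the log:

	# Sketch: the kubeadm init phases the restart path runs, in order.
	KUBEADM=/var/lib/minikube/binaries/v1.31.0-rc.0/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo "$KUBEADM" init phase certs all         --config "$CFG"
	sudo "$KUBEADM" init phase kubeconfig all    --config "$CFG"
	sudo "$KUBEADM" init phase kubelet-start     --config "$CFG"
	sudo "$KUBEADM" init phase control-plane all --config "$CFG"
	sudo "$KUBEADM" init phase etcd local        --config "$CFG"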
	I0805 12:58:42.048030  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:42.048127  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:42.548995  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.048336  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.086457  450576 api_server.go:72] duration metric: took 1.038426772s to wait for apiserver process to appear ...
	I0805 12:58:43.086487  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:43.086509  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:43.086982  450576 api_server.go:269] stopped: https://192.168.72.223:8443/healthz: Get "https://192.168.72.223:8443/healthz": dial tcp 192.168.72.223:8443: connect: connection refused
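Once the static pod manifests are laid down, minikube first waits for a kube-apiserver process and then polls /healthz until it answers; the first probe above fails with connection refused because the apiserver container is still starting. The long 500 responses that follow further down are normal during bootstrap, as individual post-start hooks flip from failed to ok one by one. A hand-rolled equivalent of the probe, as a sketch:

	# Sketch: poll the apiserver healthz endpoint until it reports ok.
	# -k is needed because the serving cert is signed by minikube's own CA.
	until curl -sk --max-time 2 https://192.168.72.223:8443/healthz | grep -q '^ok$'; do
	  sleep 0.5
	done
	echo "apiserver healthy"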
	I0805 12:58:42.080800  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862722.053648046
	
	I0805 12:58:42.080828  450884 fix.go:216] guest clock: 1722862722.053648046
	I0805 12:58:42.080839  450884 fix.go:229] Guest: 2024-08-05 12:58:42.053648046 +0000 UTC Remote: 2024-08-05 12:58:41.958987261 +0000 UTC m=+264.923354352 (delta=94.660785ms)
	I0805 12:58:42.080867  450884 fix.go:200] guest clock delta is within tolerance: 94.660785ms
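The fix.go lines above compare the guest VM's clock against the host's and only resync when the delta exceeds a tolerance; here a skew of about 95ms is accepted. Checking the same thing by hand is just two timestamps (this sketch has only one-second resolution, unlike the log's sub-second delta):

	# Sketch: eyeball host-vs-guest clock skew the way fix.go reports it.
	host=$(date +%s)
	guest=$(minikube ssh -p default-k8s-diff-port-371585 -- date +%s | tr -d '\r')
	echo "delta: $(( guest - host )) s"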
	I0805 12:58:42.080876  450884 start.go:83] releasing machines lock for "default-k8s-diff-port-371585", held for 20.132054114s
	I0805 12:58:42.080916  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.081260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:42.084196  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084662  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.084695  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084867  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085589  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085786  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085875  450884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:42.085925  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.086064  450884 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:42.086091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.088693  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089018  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089042  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.089455  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.089729  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.089730  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089785  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089881  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.089970  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.090128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.090286  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.090457  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.193160  450884 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:42.199341  450884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:42.344713  450884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:42.350944  450884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:42.351026  450884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:42.368162  450884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:42.368196  450884 start.go:495] detecting cgroup driver to use...
	I0805 12:58:42.368260  450884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:42.384477  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:42.401847  450884 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:42.401907  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:42.416318  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:42.430994  450884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:42.545944  450884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:42.721877  450884 docker.go:233] disabling docker service ...
	I0805 12:58:42.721961  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:42.743504  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:42.763111  450884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:42.914270  450884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:43.064816  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:43.090748  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:43.115493  450884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:58:43.115565  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.132497  450884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:43.132583  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.146700  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.159880  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.175598  450884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:43.191263  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.207573  450884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.229567  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
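The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs to match the kubelet config, forces conmon into the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The end state it steers toward looks roughly like the following drop-in; this is a sketch of the resulting TOML, not a verbatim dump of the file, and the section placement is an assumption based on standard CRI-O configuration:

	# Sketch: approximate contents of /etc/crio/crio.conf.d/02-crio.conf after the edits.
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF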
	I0805 12:58:43.248604  450884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:43.261272  450884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:43.261350  450884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:43.276740  450884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
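The failed sysctl probe above simply means br_netfilter was not loaded yet; minikube then loads the module and enables IPv4 forwarding, both of which kube-proxy and bridge-based CNIs rely on. A hedged sketch of the same preparation done by hand, plus how one would persist it outside this throwaway VM:

	# Sketch: the netfilter/forwarding prep the log performs, made persistent.
	sudo modprobe br_netfilter
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1 net.ipv4.ip_forward=1
	# Persisting across reboots (not needed for minikube's ephemeral VM):
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	printf 'net.bridge.bridge-nf-call-iptables=1\nnet.ipv4.ip_forward=1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf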
	I0805 12:58:43.288473  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:43.436066  450884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:43.593264  450884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:43.593355  450884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:43.599342  450884 start.go:563] Will wait 60s for crictl version
	I0805 12:58:43.599419  450884 ssh_runner.go:195] Run: which crictl
	I0805 12:58:43.603681  450884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:43.651181  450884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:43.651296  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.691418  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.725036  450884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:58:42.109016  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .Start
	I0805 12:58:42.109214  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:58:42.110192  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:58:42.110686  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:58:42.111108  451238 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:58:42.112194  451238 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:58:43.453015  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:58:43.453994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.454435  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.454504  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.454435  452186 retry.go:31] will retry after 270.355403ms: waiting for machine to come up
	I0805 12:58:43.727101  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.727583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.727641  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.727568  452186 retry.go:31] will retry after 313.75466ms: waiting for machine to come up
	I0805 12:58:44.043303  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.043954  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.043981  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.043855  452186 retry.go:31] will retry after 308.608573ms: waiting for machine to come up
	I0805 12:58:44.354830  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.355396  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.355421  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.355305  452186 retry.go:31] will retry after 510.256657ms: waiting for machine to come up
	I0805 12:58:44.866970  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.867534  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.867559  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.867424  452186 retry.go:31] will retry after 668.55006ms: waiting for machine to come up
	I0805 12:58:45.537377  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:45.537959  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:45.537989  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:45.537909  452186 retry.go:31] will retry after 677.549944ms: waiting for machine to come up
	I0805 12:58:46.217077  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:46.217591  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:46.217625  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:46.217483  452186 retry.go:31] will retry after 847.636867ms: waiting for machine to come up
	I0805 12:58:43.726277  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:43.729689  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730162  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:43.730195  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730391  450884 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:43.735448  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:43.749640  450884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:43.749808  450884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:58:43.749886  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:43.798507  450884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:58:43.798584  450884 ssh_runner.go:195] Run: which lz4
	I0805 12:58:43.803306  450884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:58:43.809104  450884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:58:43.809144  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:58:45.333758  450884 crio.go:462] duration metric: took 1.530500213s to copy over tarball
	I0805 12:58:45.333831  450884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:58:43.587275  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.303995  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.304038  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.304057  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.308815  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.308849  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.587239  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.595116  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.595151  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.087372  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.094319  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.094363  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.586909  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.592210  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.592252  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.086763  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.095151  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.095182  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.586840  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.593834  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.593870  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.087516  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.093647  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:49.093677  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.587309  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.593592  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 12:58:49.602960  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:58:49.603001  450576 api_server.go:131] duration metric: took 6.516505116s to wait for apiserver health ...
	I0805 12:58:49.603013  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:49.603024  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:49.851135  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:47.067245  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:47.067895  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:47.067930  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:47.067838  452186 retry.go:31] will retry after 1.275228928s: waiting for machine to come up
	I0805 12:58:48.344881  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:48.345295  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:48.345319  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:48.345258  452186 retry.go:31] will retry after 1.826891386s: waiting for machine to come up
	I0805 12:58:50.174583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:50.175111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:50.175138  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:50.175074  452186 retry.go:31] will retry after 1.53756677s: waiting for machine to come up
	I0805 12:58:51.714025  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:51.714529  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:51.714553  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:51.714485  452186 retry.go:31] will retry after 2.762270002s: waiting for machine to come up
	I0805 12:58:47.908896  450884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575029516s)
	I0805 12:58:47.908929  450884 crio.go:469] duration metric: took 2.575138566s to extract the tarball
	I0805 12:58:47.908938  450884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:58:47.964757  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:48.013358  450884 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:58:48.013392  450884 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:58:48.013404  450884 kubeadm.go:934] updating node { 192.168.50.228 8444 v1.30.3 crio true true} ...
	I0805 12:58:48.013533  450884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-371585 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:48.013623  450884 ssh_runner.go:195] Run: crio config
	I0805 12:58:48.062183  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:48.062219  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:48.062238  450884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:48.062274  450884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-371585 NodeName:default-k8s-diff-port-371585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:48.062474  450884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-371585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:48.062552  450884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:58:48.076490  450884 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:48.076583  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:48.090058  450884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0805 12:58:48.110202  450884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:58:48.131420  450884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0805 12:58:48.151774  450884 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:48.156904  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:48.172398  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:48.292999  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:48.310331  450884 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585 for IP: 192.168.50.228
	I0805 12:58:48.310366  450884 certs.go:194] generating shared ca certs ...
	I0805 12:58:48.310389  450884 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:48.310576  450884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:48.310640  450884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:48.310658  450884 certs.go:256] generating profile certs ...
	I0805 12:58:48.310803  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/client.key
	I0805 12:58:48.310881  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key.f7891227
	I0805 12:58:48.310946  450884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key
	I0805 12:58:48.311231  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:48.311317  450884 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:48.311354  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:48.311408  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:48.311447  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:48.311485  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:48.311545  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:48.312365  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:48.363733  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:48.395662  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:48.450822  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:48.495611  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 12:58:48.529393  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:58:48.557543  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:48.584777  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:48.611987  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:48.637500  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:48.664469  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:48.690221  450884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:48.709082  450884 ssh_runner.go:195] Run: openssl version
	I0805 12:58:48.716181  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:48.728455  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733395  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733456  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.739295  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:48.750515  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:48.761506  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.765995  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.766052  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.772121  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:48.783123  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:48.794318  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798795  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798843  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.804878  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:48.816757  450884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:48.821686  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:48.828121  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:48.834386  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:48.840425  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:48.846218  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:48.852035  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:58:48.857997  450884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:48.858131  450884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:48.858179  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.908402  450884 cri.go:89] found id: ""
	I0805 12:58:48.908471  450884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:48.921185  450884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:48.921207  450884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:48.921258  450884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:48.932907  450884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:48.933927  450884 kubeconfig.go:125] found "default-k8s-diff-port-371585" server: "https://192.168.50.228:8444"
	I0805 12:58:48.936058  450884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:48.947233  450884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.228
	I0805 12:58:48.947262  450884 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:48.947273  450884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:48.947313  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.988179  450884 cri.go:89] found id: ""
	I0805 12:58:48.988281  450884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:49.005901  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:49.016576  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:49.016597  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:49.016648  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 12:58:49.029718  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:49.029822  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:49.041670  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 12:58:49.051650  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:49.051724  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:49.061671  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.071671  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:49.071755  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.082022  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 12:58:49.092013  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:49.092103  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:49.105446  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:49.118581  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:49.233260  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.199462  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.418823  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.500350  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.594991  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:50.595109  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.096171  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.596111  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.633309  450884 api_server.go:72] duration metric: took 1.038316986s to wait for apiserver process to appear ...
	I0805 12:58:51.633350  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:51.633377  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:51.634005  450884 api_server.go:269] stopped: https://192.168.50.228:8444/healthz: Get "https://192.168.50.228:8444/healthz": dial tcp 192.168.50.228:8444: connect: connection refused
	I0805 12:58:50.021635  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:50.036338  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:50.060746  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:50.159670  450576 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:50.159724  450576 system_pods.go:61] "coredns-6f6b679f8f-nkv88" [ee7e59fb-2500-4d7a-9537-e38e08fb2445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:50.159737  450576 system_pods.go:61] "etcd-no-preload-669469" [095df0f1-069a-419f-815b-ddbec3a2291f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:50.159762  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [20b45902-b807-457a-93b3-d2b9b76d2598] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:50.159772  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [122a47ed-7f6f-4b2e-980a-45f41b997dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:50.159780  450576 system_pods.go:61] "kube-proxy-cwq69" [78e0333b-a0f4-40a6-a04d-6971bb4d09a8] Running
	I0805 12:58:50.159788  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [88010c2b-b32f-4fe1-952d-262e881b76dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:50.159796  450576 system_pods.go:61] "metrics-server-6867b74b74-p7b2r" [7e4dd805-07c8-4339-bf1a-57a98fd674cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:50.159808  450576 system_pods.go:61] "storage-provisioner" [207c46c5-c3c0-4f0b-b3ea-9b42b9e5f761] Running
	I0805 12:58:50.159817  450576 system_pods.go:74] duration metric: took 99.038765ms to wait for pod list to return data ...
	I0805 12:58:50.159830  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:50.163888  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:50.163923  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:50.163956  450576 node_conditions.go:105] duration metric: took 4.11869ms to run NodePressure ...
	I0805 12:58:50.163980  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.849885  450576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854483  450576 kubeadm.go:739] kubelet initialised
	I0805 12:58:50.854505  450576 kubeadm.go:740] duration metric: took 4.588388ms waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854514  450576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:50.861245  450576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:52.869370  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:52.134427  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.933253  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.933288  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:54.933305  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.970883  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.970928  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:55.134250  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.139762  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.139798  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:55.634499  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.644495  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.644532  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.134123  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.141958  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:56.142002  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.633573  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.640578  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 12:58:56.649624  450884 api_server.go:141] control plane version: v1.30.3
	I0805 12:58:56.649659  450884 api_server.go:131] duration metric: took 5.016299114s to wait for apiserver health ...
	I0805 12:58:56.649671  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:56.649681  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:56.651587  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:54.478201  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:54.478619  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:54.478650  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:54.478579  452186 retry.go:31] will retry after 2.992766963s: waiting for machine to come up
	I0805 12:58:56.652853  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:56.663878  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:56.699765  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:56.715040  450884 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:56.715078  450884 system_pods.go:61] "coredns-7db6d8ff4d-8rzb7" [df42e41d-4544-493f-a09d-678df1fb5258] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:56.715085  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [1ab6cd59-432a-44b8-95f2-948c585d9bbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:56.715092  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [c9173b98-c77e-4ad0-aea5-c894c045e0c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:56.715101  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [283737ec-1afa-4994-9cee-b655a8397a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:56.715105  450884 system_pods.go:61] "kube-proxy-5dr9v" [767ccb8b-2db0-4b59-b3b0-e099185bc725] Running
	I0805 12:58:56.715111  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [fb3cfdea-9370-4842-a5ab-5ac24804f59e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:56.715116  450884 system_pods.go:61] "metrics-server-569cc877fc-dsrqr" [0d4c79e4-aa6c-42f5-840b-91b9d714d078] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:56.715125  450884 system_pods.go:61] "storage-provisioner" [2dba6f50-5cdc-4195-8daf-c19dac38f488] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:58:56.715133  450884 system_pods.go:74] duration metric: took 15.343284ms to wait for pod list to return data ...
	I0805 12:58:56.715144  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:56.720006  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:56.720031  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:56.720042  450884 node_conditions.go:105] duration metric: took 4.893566ms to run NodePressure ...
	I0805 12:58:56.720059  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:56.985822  450884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990461  450884 kubeadm.go:739] kubelet initialised
	I0805 12:58:56.990484  450884 kubeadm.go:740] duration metric: took 4.636814ms waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990493  450884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:56.996266  450884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.001407  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001434  450884 pod_ready.go:81] duration metric: took 5.140963ms for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.001446  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001456  450884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.005437  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005473  450884 pod_ready.go:81] duration metric: took 3.995646ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.005486  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005495  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.009923  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009943  450884 pod_ready.go:81] duration metric: took 4.439871ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.009952  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009958  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
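The 450884 run above is in the WaitExtra phase: for each system-critical pod it waits up to 4m0s for the Ready condition, but when the hosting node itself is not Ready it records the "skipping!" error and moves on instead of blocking. A minimal client-go sketch of that pattern (illustrative only, not minikube's code; helper names and the 2s poll interval are assumptions):

package waitpods

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the pod is Ready, but returns early (the "skipping!"
// case in the log) when the node hosting the pod is itself not Ready.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			if podReady(pod) {
				return nil
			}
			node, nerr := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
			if nerr == nil && !nodeReady(node) {
				return fmt.Errorf("node %q hosting pod %q is not Ready (skipping)", node.Name, name)
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q in %q not Ready within %s", name, ns, timeout)
}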
	I0805 12:58:54.869534  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:56.370007  450576 pod_ready.go:92] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:56.370035  450576 pod_ready.go:81] duration metric: took 5.508756413s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:56.370045  450576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376357  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.376386  450576 pod_ready.go:81] duration metric: took 2.006334873s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376396  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.473094  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:57.473555  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:57.473587  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:57.473495  452186 retry.go:31] will retry after 4.27138033s: waiting for machine to come up
	I0805 12:59:01.750111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750558  451238 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:59:01.750586  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750593  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:59:01.751003  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:59:01.751061  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.751081  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:59:01.751112  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | skip adding static IP to network mk-old-k8s-version-635707 - found existing host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"}
	I0805 12:59:01.751130  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:59:01.753240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753634  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.753672  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753810  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:59:01.753854  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:59:01.753900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:01.753919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:59:01.753933  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:59:01.875919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
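The "exit 0" probe above is how the driver detects a reachable sshd: it shells out to the system ssh binary with host-key checking disabled and the machine's generated key. A rough os/exec equivalent (a sketch, not the libmachine implementation; the key path is a placeholder):

package sshprobe

import "os/exec"

// probeSSH runs "exit 0" on the guest through the system ssh binary, using the
// same hardening flags that appear in the log (no known_hosts, key-only auth).
func probeSSH(addr, keyPath string) error {
	cmd := exec.Command("/usr/bin/ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0")
	return cmd.Run() // nil means sshd answered and the key was accepted
}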
	I0805 12:59:01.876298  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:59:01.877028  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:01.879644  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880120  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.880164  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880508  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:59:01.880778  451238 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:01.880805  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:01.881039  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.882998  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883362  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.883389  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883553  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.883755  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.883900  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.884012  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.884248  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.884496  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.884511  451238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:57.103049  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103095  450884 pod_ready.go:81] duration metric: took 93.113727ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.103109  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103116  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503531  450884 pod_ready.go:92] pod "kube-proxy-5dr9v" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:57.503556  450884 pod_ready.go:81] duration metric: took 400.433562ms for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503565  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:59.514591  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:02.011308  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.148902  450393 start.go:364] duration metric: took 56.514427046s to acquireMachinesLock for "embed-certs-321139"
	I0805 12:59:03.148967  450393 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:59:03.148976  450393 fix.go:54] fixHost starting: 
	I0805 12:59:03.149432  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:03.149473  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:03.166485  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0805 12:59:03.166934  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:03.167443  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:03.167469  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:03.167808  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:03.168062  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:03.168258  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:03.170011  450393 fix.go:112] recreateIfNeeded on embed-certs-321139: state=Stopped err=<nil>
	I0805 12:59:03.170036  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	W0805 12:59:03.170221  450393 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:59:03.172109  450393 out.go:177] * Restarting existing kvm2 VM for "embed-certs-321139" ...
	I0805 12:58:58.886766  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.886792  450576 pod_ready.go:81] duration metric: took 510.389529ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.886804  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891878  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.891907  450576 pod_ready.go:81] duration metric: took 5.094036ms for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891919  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896953  450576 pod_ready.go:92] pod "kube-proxy-cwq69" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.896981  450576 pod_ready.go:81] duration metric: took 5.054422ms for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896995  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902437  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.902456  450576 pod_ready.go:81] duration metric: took 5.453487ms for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902465  450576 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:00.909633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.410487  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.173728  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Start
	I0805 12:59:03.173932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring networks are active...
	I0805 12:59:03.174932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network default is active
	I0805 12:59:03.175441  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network mk-embed-certs-321139 is active
	I0805 12:59:03.176102  450393 main.go:141] libmachine: (embed-certs-321139) Getting domain xml...
	I0805 12:59:03.176848  450393 main.go:141] libmachine: (embed-certs-321139) Creating domain...
	I0805 12:59:01.984198  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:01.984237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984501  451238 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:59:01.984534  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984750  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.987690  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988085  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.988115  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988240  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.988470  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988782  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988945  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.989173  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.989407  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.989425  451238 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:59:02.108368  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:59:02.108406  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.111301  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111669  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.111712  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111837  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.112027  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112212  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112393  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.112563  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.112797  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.112824  451238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:02.225638  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:02.225681  451238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:02.225731  451238 buildroot.go:174] setting up certificates
	I0805 12:59:02.225745  451238 provision.go:84] configureAuth start
	I0805 12:59:02.225760  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:02.226099  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:02.229252  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229643  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.229671  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229885  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.232479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.232912  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.232951  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.233125  451238 provision.go:143] copyHostCerts
	I0805 12:59:02.233188  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:02.233201  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:02.233271  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:02.233412  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:02.233426  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:02.233459  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:02.233543  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:02.233553  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:02.233581  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:02.233661  451238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
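provision.go is (re)issuing the machine's server certificate here; the san=[...] list in the line above is what ends up split between the certificate's IP and DNS SANs. A compact crypto/x509 sketch of a cert with that SAN set, assuming the CA pair is already loaded (this is an illustration, not minikube's implementation):

package provisioncert

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a CA-signed server certificate whose SANs mirror the log
// line: IPs 127.0.0.1 and 192.168.61.41, plus localhost/minikube/old-k8s-version-635707.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-635707"}},
		NotBefore:    time.Now(),
		// Lifetime is arbitrary for this sketch; the profile lists CertExpiration:26280h0m0s.
		NotAfter:    time.Now().Add(26280 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-635707"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.41")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}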
	I0805 12:59:02.470213  451238 provision.go:177] copyRemoteCerts
	I0805 12:59:02.470328  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:02.470369  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.473450  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473791  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.473829  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473964  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.474173  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.474313  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.474429  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:02.558831  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:02.583652  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:59:02.609154  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:02.635827  451238 provision.go:87] duration metric: took 410.067115ms to configureAuth
	I0805 12:59:02.635862  451238 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:02.636109  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:59:02.636357  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.638964  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639466  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.639489  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639644  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.639953  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640197  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640454  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.640733  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.640975  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.641000  451238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:02.917466  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:02.917499  451238 machine.go:97] duration metric: took 1.036701572s to provisionDockerMachine
	I0805 12:59:02.917512  451238 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:59:02.917522  451238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:02.917539  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:02.917946  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:02.917979  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.920900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921383  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.921426  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.921773  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.921958  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.922220  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.003670  451238 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:03.008348  451238 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:03.008384  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:03.008468  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:03.008588  451238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:03.008727  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:03.019098  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:03.042969  451238 start.go:296] duration metric: took 125.441712ms for postStartSetup
	I0805 12:59:03.043011  451238 fix.go:56] duration metric: took 20.961935899s for fixHost
	I0805 12:59:03.043034  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.045667  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046030  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.046062  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046254  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.046508  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046701  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046824  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.047002  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:03.047182  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:03.047192  451238 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:59:03.148773  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862743.120260193
	
	I0805 12:59:03.148798  451238 fix.go:216] guest clock: 1722862743.120260193
	I0805 12:59:03.148807  451238 fix.go:229] Guest: 2024-08-05 12:59:03.120260193 +0000 UTC Remote: 2024-08-05 12:59:03.043015059 +0000 UTC m=+231.118249223 (delta=77.245134ms)
	I0805 12:59:03.148831  451238 fix.go:200] guest clock delta is within tolerance: 77.245134ms
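The guest/remote timestamps above come from running date +%s.%N inside the VM and comparing the result with the host clock; the 77.245134ms delta is simply their difference, and fixHost leaves the guest clock alone while the skew stays within tolerance. Reproducing the arithmetic (values copied from the log; the 1s tolerance is an assumption for the example):

package clockcheck

import "time"

// guestClockDelta recomputes the skew reported in the log: guest 1722862743.120260193
// versus host 2024-08-05 12:59:03.043015059 UTC, which differ by 77.245134ms.
func guestClockDelta() (delta time.Duration, withinTolerance bool) {
	guest := time.Unix(1722862743, 120260193)
	host := time.Date(2024, time.August, 5, 12, 59, 3, 43015059, time.UTC)
	delta = guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta < time.Second // assumed tolerance for the sketch
}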
	I0805 12:59:03.148836  451238 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 21.067801046s
	I0805 12:59:03.148857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.149131  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:03.152026  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152444  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.152475  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152645  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153423  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153495  451238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:03.153551  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.153860  451238 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:03.153895  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.156566  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156903  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.156963  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157187  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157411  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.157479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.157508  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157594  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.157770  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157782  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.157924  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.158107  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.158344  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.254162  451238 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:03.260684  451238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:03.409837  451238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:03.416010  451238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:03.416093  451238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:03.433548  451238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:03.433584  451238 start.go:495] detecting cgroup driver to use...
	I0805 12:59:03.433667  451238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:03.450756  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:03.467281  451238 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:03.467341  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:03.482537  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:03.498623  451238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:03.621224  451238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:03.781777  451238 docker.go:233] disabling docker service ...
	I0805 12:59:03.781842  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:03.798020  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:03.818262  451238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:03.940897  451238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:04.075622  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:04.092487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:04.112699  451238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:59:04.112769  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.124102  451238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:04.124181  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.136339  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.147689  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
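The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so the drop-in ends up with pause_image = "registry.k8s.io/pause:3.2", cgroup_manager = "cgroupfs", and a conmon_cgroup = "pod" line directly after it. A native-Go sketch of the same edit (illustrative; minikube itself shells out to sed as logged):

package crioconf

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioDropIn applies the same changes as the sed commands in the log:
// pin the pause image, set the cgroup manager, and re-add conmon_cgroup = "pod".
func patchCrioDropIn(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, fmt.Sprintf("pause_image = %q", pauseImage))
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(s, "") // drop any existing conmon_cgroup line
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))
	return os.WriteFile(path, []byte(s), 0o644)
}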
	I0805 12:59:04.158552  451238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:04.171412  451238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:04.183284  451238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:04.183336  451238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:04.199465  451238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:04.215571  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:04.342540  451238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:04.521705  451238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:04.521786  451238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:04.526734  451238 start.go:563] Will wait 60s for crictl version
	I0805 12:59:04.526795  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:04.530528  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:04.572468  451238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:04.572557  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.602411  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.636641  451238 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:59:04.638062  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:04.641240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641734  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:04.641763  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641991  451238 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:04.646446  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:04.659876  451238 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:04.660037  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:59:04.660105  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:04.709636  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:04.709725  451238 ssh_runner.go:195] Run: which lz4
	I0805 12:59:04.714439  451238 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:59:04.719014  451238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:04.719047  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:59:06.414858  451238 crio.go:462] duration metric: took 1.70045694s to copy over tarball
	I0805 12:59:06.414950  451238 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
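Since no preload was found on the guest, the 473,237,281-byte (~451 MiB) preloaded-images tarball is scp'd over and then unpacked into /var. A quick back-of-the-envelope on the copy the log just timed (numbers taken from the lines above; the calculation itself is only illustrative):

package preload

// copyThroughputMBps recomputes the transfer rate implied by the log: 473,237,281
// bytes pushed over SSH in 1.70045694s is roughly 278 MB/s (~2.2 Gbit/s) on the
// KVM host-only network.
func copyThroughputMBps() float64 {
	const tarballBytes = 473237281.0
	const copySeconds = 1.70045694
	return tarballBytes / copySeconds / 1e6
}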
	I0805 12:59:04.513198  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.018197  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:05.911274  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.911405  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:04.478626  450393 main.go:141] libmachine: (embed-certs-321139) Waiting to get IP...
	I0805 12:59:04.479615  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.480147  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.480209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.480103  452359 retry.go:31] will retry after 236.369287ms: waiting for machine to come up
	I0805 12:59:04.718716  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.719184  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.719209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.719125  452359 retry.go:31] will retry after 296.553947ms: waiting for machine to come up
	I0805 12:59:05.017667  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.018198  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.018235  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.018143  452359 retry.go:31] will retry after 427.78496ms: waiting for machine to come up
	I0805 12:59:05.447507  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.448075  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.448105  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.448038  452359 retry.go:31] will retry after 469.229133ms: waiting for machine to come up
	I0805 12:59:05.918469  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.919013  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.919047  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.918998  452359 retry.go:31] will retry after 720.005641ms: waiting for machine to come up
	I0805 12:59:06.641103  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:06.641679  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:06.641708  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:06.641634  452359 retry.go:31] will retry after 591.439327ms: waiting for machine to come up
	I0805 12:59:07.234573  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:07.235179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:07.235207  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:07.235063  452359 retry.go:31] will retry after 1.087958168s: waiting for machine to come up
	I0805 12:59:08.324599  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:08.325179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:08.325212  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:08.325129  452359 retry.go:31] will retry after 1.316276197s: waiting for machine to come up
	I0805 12:59:09.473711  451238 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058718584s)
	I0805 12:59:09.473740  451238 crio.go:469] duration metric: took 3.058854233s to extract the tarball
	I0805 12:59:09.473748  451238 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:09.524420  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:09.562003  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:09.562035  451238 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:59:09.562107  451238 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.562159  451238 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.562156  451238 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.562194  451238 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.562228  451238 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.562256  451238 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.562374  451238 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:59:09.562274  451238 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.563981  451238 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.563993  451238 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.564007  451238 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.564015  451238 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.564032  451238 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.564041  451238 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.564076  451238 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.564075  451238 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:59:09.727888  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.732060  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.732150  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.736408  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.748051  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.753579  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.762561  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:59:09.822623  451238 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:59:09.822681  451238 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.822742  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.824314  451238 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:59:09.824360  451238 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.824404  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905619  451238 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:59:09.905778  451238 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.905738  451238 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:59:09.905944  451238 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.905998  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905851  451238 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:59:09.906075  451238 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.906133  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905861  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916767  451238 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:59:09.916796  451238 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:59:09.916812  451238 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.916830  451238 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:59:09.916864  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916868  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916905  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.916958  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.918683  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.918718  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.918776  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:10.007687  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:59:10.007721  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:59:10.007871  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:10.042432  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:59:10.061343  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:59:10.061400  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:59:10.061469  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:59:10.073852  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:59:10.084957  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:59:10.423355  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:10.563992  451238 cache_images.go:92] duration metric: took 1.001937985s to LoadCachedImages
	W0805 12:59:10.564184  451238 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
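Because the preload tarball did not contain the v1.20.0 images, minikube falls back to loading each image from the on-disk cache, first asking the container runtime whether the image is already present. A rough sketch of that presence check, shelling out to podman the same way the log does (illustration only, not minikube's cache_images code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// imagePresent asks the container runtime for the image ID; a non-zero
// exit status is treated as "not present, needs transfer from the cache".
func imagePresent(image string) bool {
	cmd := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	return cmd.Run() == nil
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/kube-proxy:v1.20.0",
		"registry.k8s.io/pause:3.2",
	}
	for _, img := range images {
		if imagePresent(img) {
			fmt.Println(img, "already in runtime, skipping")
			continue
		}
		fmt.Println(img, "needs transfer from the on-disk cache")
	}
}
```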
	I0805 12:59:10.564211  451238 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:59:10.564345  451238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:10.564427  451238 ssh_runner.go:195] Run: crio config
	I0805 12:59:10.612146  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:59:10.612180  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:10.612197  451238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:10.612226  451238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:59:10.612415  451238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:10.612507  451238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:59:10.623036  451238 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:10.623121  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:10.633484  451238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:59:10.652444  451238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:10.673192  451238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
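The kubeadm config printed above is rendered from the option values logged at kubeadm.go:181 and copied to /var/tmp/minikube/kubeadm.yaml.new. As a loose illustration of how such a document can be produced with a Go text/template (this cut-down template and opts struct are hypothetical, not minikube's actual template):

```go
package main

import (
	"os"
	"text/template"
)

// opts carries just the fields used by the cut-down template below.
type opts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values mirror the ones shown in the log for old-k8s-version-635707.
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.61.41",
		BindPort:          8443,
		NodeName:          "old-k8s-version-635707",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.20.0",
	})
}
```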
	I0805 12:59:10.694533  451238 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:10.699901  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:10.714251  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:10.838992  451238 ssh_runner.go:195] Run: sudo systemctl start kubelet
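The two Run lines before the daemon-reload pin control-plane.minikube.internal to the node IP by rewriting /etc/hosts: any stale entry is filtered out and the current mapping appended. A minimal Go sketch of the same idea, applied to an in-memory copy rather than the real /etc/hosts (ensureHostEntry is a hypothetical helper, not minikube's code):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any existing line ending in "\t<host>" and
// appends "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log.
func ensureHostEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.61.40\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostEntry(before, "192.168.61.41", "control-plane.minikube.internal"))
}
```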
	I0805 12:59:10.857248  451238 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:59:10.857279  451238 certs.go:194] generating shared ca certs ...
	I0805 12:59:10.857303  451238 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:10.857515  451238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:10.857587  451238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:10.857602  451238 certs.go:256] generating profile certs ...
	I0805 12:59:10.857746  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:59:10.857847  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:59:10.857907  451238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:59:10.858072  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:10.858122  451238 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:10.858143  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:10.858177  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:10.858207  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:10.858235  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:10.858294  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:10.859247  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:10.908518  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:10.949310  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:10.981447  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:11.008085  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:59:11.035539  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:59:11.071371  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:11.099842  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:59:11.135629  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:11.164194  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:11.190595  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:11.219765  451238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:11.240836  451238 ssh_runner.go:195] Run: openssl version
	I0805 12:59:11.247516  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:11.260736  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266004  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266100  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.273012  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:11.285453  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:11.296934  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301588  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301655  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.307459  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:11.318833  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:11.330224  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334864  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334917  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.341338  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:11.353084  451238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:11.358532  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:11.365419  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:11.371581  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:11.378308  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:11.384640  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:11.390622  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
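The run of `openssl x509 ... -checkend 86400` commands above verifies that each existing control-plane certificate stays valid for at least another 24 hours before the configuration is reused. The same check can be expressed with crypto/x509; the sketch below is only an illustration, and the path is an example taken from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given window, the equivalent of openssl's -checkend.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h, regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
```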
	I0805 12:59:11.397027  451238 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:11.397199  451238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:11.397286  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.436612  451238 cri.go:89] found id: ""
	I0805 12:59:11.436689  451238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:11.447906  451238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:11.447927  451238 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:11.447984  451238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:11.459282  451238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:11.460548  451238 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:11.461355  451238 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-635707" cluster setting kubeconfig missing "old-k8s-version-635707" context setting]
	I0805 12:59:11.462324  451238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:11.476306  451238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:11.487869  451238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.41
	I0805 12:59:11.487911  451238 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:11.487927  451238 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:11.487988  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.526601  451238 cri.go:89] found id: ""
	I0805 12:59:11.526674  451238 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:11.545429  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:11.556725  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:11.556755  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:11.556820  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:11.566564  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:11.566648  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:11.576859  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:11.586237  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:11.586329  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:11.596721  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.607239  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:11.607340  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.617626  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:11.627179  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:11.627251  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:11.637566  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:11.648889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:11.780270  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:08.018320  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:08.018363  450884 pod_ready.go:81] duration metric: took 10.514788401s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:08.018379  450884 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:10.270876  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:10.409419  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:12.410565  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:09.643077  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:09.643655  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:09.643692  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:09.643554  452359 retry.go:31] will retry after 1.473183692s: waiting for machine to come up
	I0805 12:59:11.118468  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:11.119005  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:11.119035  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:11.118943  452359 retry.go:31] will retry after 2.036333626s: waiting for machine to come up
	I0805 12:59:13.156866  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:13.157390  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:13.157419  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:13.157339  452359 retry.go:31] will retry after 2.095065362s: waiting for machine to come up
	I0805 12:59:12.549918  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.781853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.877381  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.978141  451238 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:12.978250  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.479242  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.978456  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.478575  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.978783  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.479342  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.978307  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.479180  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
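The half-second cadence of `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above is the wait loop for the apiserver process to appear after the init phases. A bare-bones local version of that poll (illustrative only; minikube runs the command over SSH with its own timeout handling):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a process matching the pattern exists.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process to appear")
}
```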
	I0805 12:59:12.526543  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.027362  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:14.909480  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:16.911090  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.253589  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:15.254081  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:15.254111  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:15.254020  452359 retry.go:31] will retry after 2.859783781s: waiting for machine to come up
	I0805 12:59:18.116972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:18.117528  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:18.117559  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:18.117486  452359 retry.go:31] will retry after 4.456427854s: waiting for machine to come up
	I0805 12:59:16.978915  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.978574  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.478343  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.978820  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.478488  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.978335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.478945  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.979040  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.479324  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.525332  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.525407  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.025092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.410416  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:21.908646  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.576842  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577261  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has current primary IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577291  450393 main.go:141] libmachine: (embed-certs-321139) Found IP for machine: 192.168.39.196
	I0805 12:59:22.577306  450393 main.go:141] libmachine: (embed-certs-321139) Reserving static IP address...
	I0805 12:59:22.577834  450393 main.go:141] libmachine: (embed-certs-321139) Reserved static IP address: 192.168.39.196
	I0805 12:59:22.577877  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.577893  450393 main.go:141] libmachine: (embed-certs-321139) Waiting for SSH to be available...
	I0805 12:59:22.577915  450393 main.go:141] libmachine: (embed-certs-321139) DBG | skip adding static IP to network mk-embed-certs-321139 - found existing host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"}
	I0805 12:59:22.577922  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Getting to WaitForSSH function...
	I0805 12:59:22.580080  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580520  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.580552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580707  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH client type: external
	I0805 12:59:22.580742  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa (-rw-------)
	I0805 12:59:22.580764  450393 main.go:141] libmachine: (embed-certs-321139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:22.580778  450393 main.go:141] libmachine: (embed-certs-321139) DBG | About to run SSH command:
	I0805 12:59:22.580793  450393 main.go:141] libmachine: (embed-certs-321139) DBG | exit 0
	I0805 12:59:22.703872  450393 main.go:141] libmachine: (embed-certs-321139) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:22.704333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetConfigRaw
	I0805 12:59:22.705046  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:22.707544  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.707919  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.707951  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.708240  450393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/config.json ...
	I0805 12:59:22.708474  450393 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:22.708501  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:22.708755  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.711177  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711488  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.711510  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711639  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.711842  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.711998  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.712157  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.712378  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.712581  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.712595  450393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:59:22.816371  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:22.816433  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816708  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:59:22.816743  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816959  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.819715  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820085  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.820108  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820321  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.820510  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820656  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820794  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.820952  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.821203  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.821229  450393 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-321139 && echo "embed-certs-321139" | sudo tee /etc/hostname
	I0805 12:59:22.938845  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-321139
	
	I0805 12:59:22.938888  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.942264  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942651  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.942684  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942904  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.943161  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943383  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943568  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.943777  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.943987  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.944011  450393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-321139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-321139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-321139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:23.062700  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:23.062734  450393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:23.062762  450393 buildroot.go:174] setting up certificates
	I0805 12:59:23.062774  450393 provision.go:84] configureAuth start
	I0805 12:59:23.062800  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:23.063142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.065839  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066140  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.066175  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066359  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.069214  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069562  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.069597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069746  450393 provision.go:143] copyHostCerts
	I0805 12:59:23.069813  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:23.069827  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:23.069897  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:23.070014  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:23.070025  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:23.070083  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:23.070185  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:23.070197  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:23.070226  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:23.070308  450393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.embed-certs-321139 san=[127.0.0.1 192.168.39.196 embed-certs-321139 localhost minikube]
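provision.go:117 regenerates the docker-machine server certificate, signed by the minikube CA and carrying the node's names and IPs as SANs. The sketch below issues a comparable SAN-bearing certificate with crypto/x509; it creates a throwaway CA in-process purely for the example, whereas minikube reuses the existing ca.pem/ca-key.pem from the .minikube directory.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; errors are ignored for brevity in this sketch.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with SANs like the ones seen in the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "embed-certs-321139"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-321139", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.196")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server certificate, %d bytes of DER\n", len(srvDER))
}
```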
	I0805 12:59:23.223660  450393 provision.go:177] copyRemoteCerts
	I0805 12:59:23.223759  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:23.223799  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.226548  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.226980  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.227014  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.227195  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.227449  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.227624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.227801  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.311952  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:59:23.336888  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:23.363397  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:23.388197  450393 provision.go:87] duration metric: took 325.408192ms to configureAuth
	I0805 12:59:23.388234  450393 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:23.388470  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:23.388596  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.391247  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.391626  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391843  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.392054  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392240  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392371  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.392528  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.392825  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.392853  450393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:23.675427  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:23.675459  450393 machine.go:97] duration metric: took 966.969142ms to provisionDockerMachine
	I0805 12:59:23.675472  450393 start.go:293] postStartSetup for "embed-certs-321139" (driver="kvm2")
	I0805 12:59:23.675484  450393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:23.675515  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.675885  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:23.675912  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.678780  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679100  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.679152  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.679524  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.679657  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.679860  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.764372  450393 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:23.769059  450393 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:23.769088  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:23.769162  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:23.769231  450393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:23.769334  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:23.781287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:23.808609  450393 start.go:296] duration metric: took 133.117086ms for postStartSetup
	I0805 12:59:23.808665  450393 fix.go:56] duration metric: took 20.659690035s for fixHost
	I0805 12:59:23.808694  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.811519  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.811948  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.811978  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.812164  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.812366  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812539  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812708  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.812897  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.813137  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.813151  450393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:59:23.916498  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862763.883942670
	
	I0805 12:59:23.916521  450393 fix.go:216] guest clock: 1722862763.883942670
	I0805 12:59:23.916536  450393 fix.go:229] Guest: 2024-08-05 12:59:23.88394267 +0000 UTC Remote: 2024-08-05 12:59:23.8086712 +0000 UTC m=+359.764794687 (delta=75.27147ms)
	I0805 12:59:23.916570  450393 fix.go:200] guest clock delta is within tolerance: 75.27147ms
	I0805 12:59:23.916578  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 20.767637373s
	I0805 12:59:23.916598  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.916867  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.919570  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.919972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.919999  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.920142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920666  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920837  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920930  450393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:23.920981  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.921063  450393 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:23.921083  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.924176  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924557  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924588  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924613  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924635  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924749  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.924936  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.925021  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925127  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925286  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925369  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.925454  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:24.000693  450393 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:24.023194  450393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:24.178807  450393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:24.184954  450393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:24.185031  450393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:24.201420  450393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:24.201453  450393 start.go:495] detecting cgroup driver to use...
	I0805 12:59:24.201543  450393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:24.218603  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:24.233928  450393 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:24.233999  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:24.248455  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:24.263355  450393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:24.386806  450393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:24.565128  450393 docker.go:233] disabling docker service ...
	I0805 12:59:24.565229  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:24.581053  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:24.594297  450393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:24.716615  450393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:24.835687  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:24.850666  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:24.870993  450393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:59:24.871055  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.881731  450393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:24.881815  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.893156  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.903802  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.915189  450393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:24.926967  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.938008  450393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.956033  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.967863  450393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:24.977758  450393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:24.977822  450393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:24.993837  450393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:25.005009  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:25.135856  450393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:25.277425  450393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:25.277513  450393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:25.282628  450393 start.go:563] Will wait 60s for crictl version
	I0805 12:59:25.282704  450393 ssh_runner.go:195] Run: which crictl
	I0805 12:59:25.287324  450393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:25.335315  450393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:25.335396  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.367574  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.398926  450393 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:59:21.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.478367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.978424  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.478877  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.978841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.478635  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.978824  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.479076  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.979222  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.478928  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.025234  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:26.028817  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:23.909428  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.910877  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:27.911235  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.400219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:25.403052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403508  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:25.403552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403849  450393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:25.408402  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:25.423146  450393 kubeadm.go:883] updating cluster {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:25.423301  450393 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:59:25.423368  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:25.460713  450393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:59:25.460795  450393 ssh_runner.go:195] Run: which lz4
	I0805 12:59:25.464997  450393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:59:25.469397  450393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:25.469452  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:59:26.966110  450393 crio.go:462] duration metric: took 1.501152522s to copy over tarball
	I0805 12:59:26.966207  450393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:26.978648  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.478951  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.978405  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.479008  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.978521  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.479199  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.979288  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.479030  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.978372  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.479194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.525888  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:31.025690  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:30.410973  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:32.910889  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:29.287605  450393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321364872s)
	I0805 12:59:29.287636  450393 crio.go:469] duration metric: took 2.321487153s to extract the tarball
	I0805 12:59:29.287647  450393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:29.329182  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:29.372183  450393 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:59:29.372211  450393 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:59:29.372220  450393 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0805 12:59:29.372349  450393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-321139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:29.372433  450393 ssh_runner.go:195] Run: crio config
	I0805 12:59:29.426003  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:29.426025  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:29.426036  450393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:29.426059  450393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-321139 NodeName:embed-certs-321139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:59:29.426192  450393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-321139"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:29.426250  450393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:59:29.436248  450393 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:29.436315  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:29.445844  450393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0805 12:59:29.463125  450393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:29.479685  450393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0805 12:59:29.499033  450393 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:29.503175  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:29.516141  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:29.645914  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:29.664578  450393 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139 for IP: 192.168.39.196
	I0805 12:59:29.664608  450393 certs.go:194] generating shared ca certs ...
	I0805 12:59:29.664626  450393 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:29.664853  450393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:29.664922  450393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:29.664939  450393 certs.go:256] generating profile certs ...
	I0805 12:59:29.665058  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/client.key
	I0805 12:59:29.665143  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key.ce53eda3
	I0805 12:59:29.665183  450393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key
	I0805 12:59:29.665293  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:29.665324  450393 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:29.665331  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:29.665360  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:29.665382  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:29.665405  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:29.665442  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:29.666287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:29.705969  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:29.752700  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:29.779819  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:29.806578  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0805 12:59:29.832277  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:59:29.861682  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:29.888113  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:59:29.915023  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:29.942582  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:29.971225  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:29.999278  450393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:30.018294  450393 ssh_runner.go:195] Run: openssl version
	I0805 12:59:30.024645  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:30.035446  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040216  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040279  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.046151  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:30.057664  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:30.068822  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074073  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074138  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.080126  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:30.091168  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:30.103171  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108840  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108924  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.115469  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:30.126742  450393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:30.132008  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:30.138285  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:30.144251  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:30.150718  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:30.157183  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:30.163709  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:59:30.170852  450393 kubeadm.go:392] StartCluster: {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:30.170987  450393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:30.171055  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.216014  450393 cri.go:89] found id: ""
	I0805 12:59:30.216103  450393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:30.234046  450393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:30.234076  450393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:30.234151  450393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:30.245861  450393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:30.247434  450393 kubeconfig.go:125] found "embed-certs-321139" server: "https://192.168.39.196:8443"
	I0805 12:59:30.250024  450393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:30.261066  450393 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.196
	I0805 12:59:30.261116  450393 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:30.261140  450393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:30.261201  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.306587  450393 cri.go:89] found id: ""
	I0805 12:59:30.306678  450393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:30.326818  450393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:30.336908  450393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:30.336931  450393 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:30.336984  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:30.346004  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:30.346105  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:30.355979  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:30.366124  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:30.366185  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:30.376923  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.386526  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:30.386599  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.396661  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:30.406693  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:30.406765  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:30.417789  450393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:30.428214  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:30.554777  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.703579  450393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.14876196s)
	I0805 12:59:31.703620  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.925724  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.999840  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:32.089948  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:32.090084  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.590152  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.090222  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.115351  450393 api_server.go:72] duration metric: took 1.025404322s to wait for apiserver process to appear ...
	I0805 12:59:33.115385  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:59:33.115411  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:33.115983  450393 api_server.go:269] stopped: https://192.168.39.196:8443/healthz: Get "https://192.168.39.196:8443/healthz": dial tcp 192.168.39.196:8443: connect: connection refused
	I0805 12:59:33.616210  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:31.978481  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.479031  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.978796  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.478677  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.979377  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.478595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.979227  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.478695  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.978911  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.479327  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.027363  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:35.525528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:36.274855  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.274895  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.274912  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.314290  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.314325  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.615566  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.620594  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:36.620626  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.116251  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.120719  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:37.120749  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.616330  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.620778  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 12:59:37.627608  450393 api_server.go:141] control plane version: v1.30.3
	I0805 12:59:37.627640  450393 api_server.go:131] duration metric: took 4.512246076s to wait for apiserver health ...
	I0805 12:59:37.627652  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:37.627661  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:37.628987  450393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:59:35.410070  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.411719  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.630068  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:59:37.650034  450393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:59:37.691891  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:59:37.704810  450393 system_pods.go:59] 8 kube-system pods found
	I0805 12:59:37.704855  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:59:37.704866  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:59:37.704887  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:59:37.704900  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:59:37.704916  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 12:59:37.704928  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:59:37.704946  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:59:37.704956  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:59:37.704967  450393 system_pods.go:74] duration metric: took 13.04358ms to wait for pod list to return data ...
	I0805 12:59:37.704980  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:59:37.710340  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:59:37.710367  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 12:59:37.710382  450393 node_conditions.go:105] duration metric: took 5.392102ms to run NodePressure ...
	I0805 12:59:37.710402  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:37.995945  450393 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000274  450393 kubeadm.go:739] kubelet initialised
	I0805 12:59:38.000295  450393 kubeadm.go:740] duration metric: took 4.323835ms waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000302  450393 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:38.006122  450393 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.012368  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012392  450393 pod_ready.go:81] duration metric: took 6.243837ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.012400  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012406  450393 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.016338  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016357  450393 pod_ready.go:81] duration metric: took 3.943012ms for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.016364  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016369  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.021019  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021044  450393 pod_ready.go:81] duration metric: took 4.667242ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.021055  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021063  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.096303  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096334  450393 pod_ready.go:81] duration metric: took 75.253785ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.096345  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096351  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.495648  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495677  450393 pod_ready.go:81] duration metric: took 399.318117ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.495687  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495694  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.896066  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896091  450393 pod_ready.go:81] duration metric: took 400.39101ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.896101  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896108  450393 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:39.295587  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295618  450393 pod_ready.go:81] duration metric: took 399.499354ms for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:39.295632  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295653  450393 pod_ready.go:38] duration metric: took 1.295340252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:39.295675  450393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:59:39.308136  450393 ops.go:34] apiserver oom_adj: -16
	I0805 12:59:39.308161  450393 kubeadm.go:597] duration metric: took 9.07407738s to restartPrimaryControlPlane
	I0805 12:59:39.308170  450393 kubeadm.go:394] duration metric: took 9.137335392s to StartCluster
	I0805 12:59:39.308188  450393 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.308272  450393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:39.310750  450393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.311015  450393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:59:39.311149  450393 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:59:39.311240  450393 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-321139"
	I0805 12:59:39.311289  450393 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-321139"
	W0805 12:59:39.311303  450393 addons.go:243] addon storage-provisioner should already be in state true
	I0805 12:59:39.311301  450393 addons.go:69] Setting metrics-server=true in profile "embed-certs-321139"
	I0805 12:59:39.311305  450393 addons.go:69] Setting default-storageclass=true in profile "embed-certs-321139"
	I0805 12:59:39.311351  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311360  450393 addons.go:234] Setting addon metrics-server=true in "embed-certs-321139"
	W0805 12:59:39.311371  450393 addons.go:243] addon metrics-server should already be in state true
	I0805 12:59:39.311371  450393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-321139"
	I0805 12:59:39.311454  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311287  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:39.311848  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311897  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311906  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.311912  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311964  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.312115  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.313050  450393 out.go:177] * Verifying Kubernetes components...
	I0805 12:59:39.314390  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:39.327427  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0805 12:59:39.327687  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0805 12:59:39.328016  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328155  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328609  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328649  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.328735  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328786  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.329013  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329086  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329560  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329599  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.329676  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329721  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.330884  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0805 12:59:39.331381  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.331878  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.331902  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.332289  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.332529  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.336244  450393 addons.go:234] Setting addon default-storageclass=true in "embed-certs-321139"
	W0805 12:59:39.336269  450393 addons.go:243] addon default-storageclass should already be in state true
	I0805 12:59:39.336305  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.336688  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.336735  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.347255  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0805 12:59:39.347411  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0805 12:59:39.347776  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.347910  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.348271  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348291  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348464  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348476  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348603  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348760  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.348817  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348955  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.350697  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.350906  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.352896  450393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:39.352895  450393 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 12:59:39.354185  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 12:59:39.354207  450393 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 12:59:39.354224  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.354266  450393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.354277  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:59:39.354292  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.356641  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0805 12:59:39.357213  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.357546  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.357791  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.357814  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.357867  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.358001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.358020  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359294  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359322  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.359337  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359345  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.359353  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359488  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359669  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.359783  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.359977  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.360009  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.360077  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.360210  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.380935  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0805 12:59:39.381394  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.381987  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.382029  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.382362  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.382603  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.384225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.384497  450393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.384515  450393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:59:39.384536  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.389471  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.389972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.390001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.390124  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.390303  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.390604  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.390791  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.513696  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:39.533291  450393 node_ready.go:35] waiting up to 6m0s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:39.597816  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.700234  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.719936  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 12:59:39.719958  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 12:59:39.760405  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 12:59:39.760441  450393 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 12:59:39.808765  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.808794  450393 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 12:59:39.833073  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.946594  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.946633  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.946968  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.946995  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.947121  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.947137  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.947456  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.947477  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947490  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.953919  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.953942  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.954189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.954209  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636249  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636274  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636638  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:40.636715  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.636729  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636745  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636757  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636989  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.637008  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.671789  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.671819  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672207  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672217  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.672225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672468  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672485  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672499  450393 addons.go:475] Verifying addon metrics-server=true in "embed-certs-321139"
	I0805 12:59:40.674497  450393 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 12:59:36.978361  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.478380  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.478283  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.979257  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.478407  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.978772  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.478395  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.979309  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.478302  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.026001  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.026706  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:39.909336  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:41.910240  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.675778  450393 addons.go:510] duration metric: took 1.364642066s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 12:59:41.537321  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:44.037571  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:41.978791  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.478841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.478344  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.978613  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.478756  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.478363  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.478417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.524568  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:45.024950  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:47.025453  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:44.408846  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.410085  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.537183  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:47.037178  450393 node_ready.go:49] node "embed-certs-321139" has status "Ready":"True"
	I0805 12:59:47.037206  450393 node_ready.go:38] duration metric: took 7.503884334s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:47.037221  450393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:47.043159  450393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048037  450393 pod_ready.go:92] pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:47.048088  450393 pod_ready.go:81] duration metric: took 4.901694ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048102  450393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055429  450393 pod_ready.go:92] pod "etcd-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.055454  450393 pod_ready.go:81] duration metric: took 2.007345086s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055464  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060072  450393 pod_ready.go:92] pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.060095  450393 pod_ready.go:81] duration metric: took 4.624968ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060103  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065663  450393 pod_ready.go:92] pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.065689  450393 pod_ready.go:81] duration metric: took 5.578205ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065708  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071143  450393 pod_ready.go:92] pod "kube-proxy-shgv2" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.071166  450393 pod_ready.go:81] duration metric: took 5.450104ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071174  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:46.978356  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.478322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.978417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.478966  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.979317  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.478449  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.978364  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.479294  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.978435  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.478614  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.028075  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.524299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:48.908177  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:50.908490  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:52.909257  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:49.438002  450393 pod_ready.go:92] pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.438032  450393 pod_ready.go:81] duration metric: took 366.851004ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.438042  450393 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:51.443490  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:53.444534  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.978526  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.479187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.979090  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.478733  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.978571  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.478525  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.979125  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.478711  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.979266  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.478956  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.525369  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.526660  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:54.909757  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.409489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.445189  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.944983  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:56.979226  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.978634  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.478338  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.978987  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.479290  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.978383  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.478373  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.978412  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.479312  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.527240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.024177  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.024749  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:59.908362  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.909101  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.445471  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.944535  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.479119  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.978313  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.478401  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.979029  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.478963  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.978393  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.478418  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.978381  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.479229  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.028522  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.525385  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:04.409119  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.409863  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:05.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:07.452452  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.979172  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.479251  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.979183  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.478722  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.979248  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.478527  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.978581  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.478499  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.978520  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.478843  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.025651  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.525086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:08.909528  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.408408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:13.410472  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:09.945614  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:12.443723  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.978536  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.478504  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.979179  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:12.979258  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:13.022653  451238 cri.go:89] found id: ""
	I0805 13:00:13.022680  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.022689  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:13.022696  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:13.022766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:13.059292  451238 cri.go:89] found id: ""
	I0805 13:00:13.059326  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.059336  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:13.059343  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:13.059399  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:13.098750  451238 cri.go:89] found id: ""
	I0805 13:00:13.098782  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.098793  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:13.098802  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:13.098866  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:13.133307  451238 cri.go:89] found id: ""
	I0805 13:00:13.133338  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.133346  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:13.133353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:13.133420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:13.171124  451238 cri.go:89] found id: ""
	I0805 13:00:13.171160  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.171170  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:13.171177  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:13.171237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:13.209200  451238 cri.go:89] found id: ""
	I0805 13:00:13.209235  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.209247  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:13.209254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:13.209312  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:13.244261  451238 cri.go:89] found id: ""
	I0805 13:00:13.244302  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.244313  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:13.244324  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:13.244397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:13.283295  451238 cri.go:89] found id: ""
	I0805 13:00:13.283331  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.283342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:13.283356  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:13.283372  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:13.344134  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:13.344174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:13.384084  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:13.384119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:13.433784  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:13.433821  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:13.449756  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:13.449786  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:13.573090  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:16.074053  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:16.087817  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:16.087900  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:16.130938  451238 cri.go:89] found id: ""
	I0805 13:00:16.130970  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.130981  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:16.130989  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:16.131058  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:16.184208  451238 cri.go:89] found id: ""
	I0805 13:00:16.184245  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.184259  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:16.184269  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:16.184346  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:16.230959  451238 cri.go:89] found id: ""
	I0805 13:00:16.230998  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.231011  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:16.231020  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:16.231100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:16.282886  451238 cri.go:89] found id: ""
	I0805 13:00:16.282940  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.282954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:16.282963  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:16.283024  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:16.320345  451238 cri.go:89] found id: ""
	I0805 13:00:16.320381  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.320397  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:16.320404  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:16.320521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:16.356390  451238 cri.go:89] found id: ""
	I0805 13:00:16.356427  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.356439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:16.356447  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:16.356503  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:16.400477  451238 cri.go:89] found id: ""
	I0805 13:00:16.400510  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.400529  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:16.400539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:16.400612  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:16.440634  451238 cri.go:89] found id: ""
	I0805 13:00:16.440662  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.440673  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:16.440685  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:16.440702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:16.510879  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:16.510922  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:16.554294  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:16.554332  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:16.607798  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:16.607853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:16.622618  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:16.622655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:16.702599  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:14.025025  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.025182  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:15.909245  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.409729  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:14.445222  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.445451  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.944533  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:19.202789  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:19.215776  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:19.215851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:19.250503  451238 cri.go:89] found id: ""
	I0805 13:00:19.250540  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.250551  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:19.250558  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:19.250630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:19.287358  451238 cri.go:89] found id: ""
	I0805 13:00:19.287392  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.287403  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:19.287412  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:19.287484  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:19.322167  451238 cri.go:89] found id: ""
	I0805 13:00:19.322195  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.322203  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:19.322209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:19.322262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:19.356874  451238 cri.go:89] found id: ""
	I0805 13:00:19.356905  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.356923  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:19.356931  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:19.357006  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:19.395172  451238 cri.go:89] found id: ""
	I0805 13:00:19.395206  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.395217  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:19.395227  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:19.395294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:19.438404  451238 cri.go:89] found id: ""
	I0805 13:00:19.438431  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.438439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:19.438445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:19.438510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:19.474727  451238 cri.go:89] found id: ""
	I0805 13:00:19.474755  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.474762  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:19.474769  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:19.474832  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:19.513906  451238 cri.go:89] found id: ""
	I0805 13:00:19.513945  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.513953  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:19.513963  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:19.513977  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:19.528337  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:19.528378  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:19.601135  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.601168  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:19.601185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:19.676792  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:19.676844  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:19.716861  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:19.716894  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:18.025634  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.027525  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.909150  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.910153  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.945009  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:23.444529  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.266971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:22.280346  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:22.280422  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:22.314788  451238 cri.go:89] found id: ""
	I0805 13:00:22.314816  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.314824  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:22.314831  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:22.314884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:22.357357  451238 cri.go:89] found id: ""
	I0805 13:00:22.357394  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.357405  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:22.357414  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:22.357483  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:22.393254  451238 cri.go:89] found id: ""
	I0805 13:00:22.393288  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.393296  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:22.393302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:22.393366  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:22.434766  451238 cri.go:89] found id: ""
	I0805 13:00:22.434796  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.434807  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:22.434815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:22.434887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:22.475649  451238 cri.go:89] found id: ""
	I0805 13:00:22.475676  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.475684  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:22.475690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:22.475754  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:22.515633  451238 cri.go:89] found id: ""
	I0805 13:00:22.515662  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.515670  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:22.515677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:22.515757  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:22.550716  451238 cri.go:89] found id: ""
	I0805 13:00:22.550749  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.550759  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:22.550767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:22.550849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:22.588537  451238 cri.go:89] found id: ""
	I0805 13:00:22.588571  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.588583  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:22.588595  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:22.588609  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.638535  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:22.638577  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:22.654879  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:22.654919  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:22.721482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:22.721513  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:22.721529  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:22.801442  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:22.801489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.343805  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:25.358068  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:25.358176  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:25.393734  451238 cri.go:89] found id: ""
	I0805 13:00:25.393767  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.393778  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:25.393785  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:25.393849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:25.428217  451238 cri.go:89] found id: ""
	I0805 13:00:25.428244  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.428252  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:25.428257  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:25.428316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:25.462826  451238 cri.go:89] found id: ""
	I0805 13:00:25.462858  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.462869  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:25.462877  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:25.462961  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:25.502960  451238 cri.go:89] found id: ""
	I0805 13:00:25.502989  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.502998  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:25.503006  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:25.503072  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:25.538859  451238 cri.go:89] found id: ""
	I0805 13:00:25.538888  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.538897  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:25.538902  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:25.538964  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:25.577850  451238 cri.go:89] found id: ""
	I0805 13:00:25.577883  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.577894  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:25.577901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:25.577988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:25.611728  451238 cri.go:89] found id: ""
	I0805 13:00:25.611773  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.611785  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:25.611793  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:25.611865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:25.654987  451238 cri.go:89] found id: ""
	I0805 13:00:25.655018  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.655027  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:25.655039  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:25.655052  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:25.669124  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:25.669160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:25.747354  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:25.747380  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:25.747398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:25.825198  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:25.825241  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.865511  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:25.865546  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.526638  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.024414  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.025393  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.409361  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.411148  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.444607  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.447460  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:28.418263  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:28.431831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:28.431895  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:28.470249  451238 cri.go:89] found id: ""
	I0805 13:00:28.470280  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.470291  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:28.470301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:28.470373  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:28.506935  451238 cri.go:89] found id: ""
	I0805 13:00:28.506968  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.506977  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:28.506985  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:28.507053  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:28.546621  451238 cri.go:89] found id: ""
	I0805 13:00:28.546652  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.546663  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:28.546671  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:28.546749  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:28.584699  451238 cri.go:89] found id: ""
	I0805 13:00:28.584734  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.584745  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:28.584753  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:28.584820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:28.620693  451238 cri.go:89] found id: ""
	I0805 13:00:28.620726  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.620736  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:28.620744  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:28.620814  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:28.657340  451238 cri.go:89] found id: ""
	I0805 13:00:28.657370  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.657379  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:28.657385  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:28.657438  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:28.695126  451238 cri.go:89] found id: ""
	I0805 13:00:28.695156  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.695166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:28.695174  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:28.695239  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:28.729757  451238 cri.go:89] found id: ""
	I0805 13:00:28.729808  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.729821  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:28.729834  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:28.729852  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:28.769642  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:28.769675  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:28.818076  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:28.818114  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:28.831466  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:28.831496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:28.902788  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:28.902818  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:28.902836  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.482482  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:31.497767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:31.497867  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:31.536922  451238 cri.go:89] found id: ""
	I0805 13:00:31.536948  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.536960  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:31.536969  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:31.537040  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:31.572422  451238 cri.go:89] found id: ""
	I0805 13:00:31.572456  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.572466  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:31.572472  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:31.572531  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:31.607961  451238 cri.go:89] found id: ""
	I0805 13:00:31.607996  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.608008  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:31.608016  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:31.608082  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:31.641771  451238 cri.go:89] found id: ""
	I0805 13:00:31.641800  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.641822  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:31.641830  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:31.641904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:31.681661  451238 cri.go:89] found id: ""
	I0805 13:00:31.681695  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.681707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:31.681715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:31.681791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:31.723777  451238 cri.go:89] found id: ""
	I0805 13:00:31.723814  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.723823  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:31.723829  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:31.723922  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:31.759898  451238 cri.go:89] found id: ""
	I0805 13:00:31.759935  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.759948  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:31.759957  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:31.760022  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:31.798433  451238 cri.go:89] found id: ""
	I0805 13:00:31.798462  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.798470  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:31.798480  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:31.798497  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:31.872005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:31.872030  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:31.872045  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.952201  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:31.952240  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:29.524445  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.525646  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.909901  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:32.408826  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.944170  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.944427  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.995920  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:31.995955  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:32.047453  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:32.047493  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.562369  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:34.576644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:34.576708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:34.613002  451238 cri.go:89] found id: ""
	I0805 13:00:34.613036  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.613047  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:34.613056  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:34.613127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:34.650723  451238 cri.go:89] found id: ""
	I0805 13:00:34.650757  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.650769  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:34.650777  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:34.650851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:34.689047  451238 cri.go:89] found id: ""
	I0805 13:00:34.689073  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.689081  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:34.689088  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:34.689148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:34.727552  451238 cri.go:89] found id: ""
	I0805 13:00:34.727592  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.727604  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:34.727612  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:34.727683  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:34.761661  451238 cri.go:89] found id: ""
	I0805 13:00:34.761696  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.761707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:34.761715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:34.761791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:34.800062  451238 cri.go:89] found id: ""
	I0805 13:00:34.800116  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.800128  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:34.800137  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:34.800198  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:34.833536  451238 cri.go:89] found id: ""
	I0805 13:00:34.833566  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.833578  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:34.833586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:34.833654  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:34.868079  451238 cri.go:89] found id: ""
	I0805 13:00:34.868117  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.868126  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:34.868135  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:34.868149  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:34.920092  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:34.920124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.934484  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:34.934510  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:35.007716  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:35.007751  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:35.007768  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:35.088183  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:35.088233  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:34.024704  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.025754  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.409917  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.409993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.444842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.943985  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:37.633443  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:37.647405  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:37.647470  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:37.684682  451238 cri.go:89] found id: ""
	I0805 13:00:37.684711  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.684720  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:37.684727  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:37.684779  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:37.723413  451238 cri.go:89] found id: ""
	I0805 13:00:37.723442  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.723449  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:37.723455  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:37.723506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:37.758388  451238 cri.go:89] found id: ""
	I0805 13:00:37.758418  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.758428  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:37.758437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:37.758501  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:37.797846  451238 cri.go:89] found id: ""
	I0805 13:00:37.797879  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.797890  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:37.797901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:37.797971  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:37.837053  451238 cri.go:89] found id: ""
	I0805 13:00:37.837082  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.837092  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:37.837104  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:37.837163  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:37.876185  451238 cri.go:89] found id: ""
	I0805 13:00:37.876211  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.876220  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:37.876226  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:37.876294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:37.915318  451238 cri.go:89] found id: ""
	I0805 13:00:37.915350  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.915362  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:37.915370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:37.915429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:37.953916  451238 cri.go:89] found id: ""
	I0805 13:00:37.953944  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.953954  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:37.953964  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:37.953976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.991116  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:37.991154  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:38.043796  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:38.043838  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:38.058636  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:38.058669  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:38.143022  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:38.143051  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:38.143067  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:40.721468  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:40.735679  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:40.735774  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:40.773583  451238 cri.go:89] found id: ""
	I0805 13:00:40.773609  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.773617  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:40.773626  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:40.773685  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:40.819857  451238 cri.go:89] found id: ""
	I0805 13:00:40.819886  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.819895  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:40.819901  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:40.819963  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:40.857156  451238 cri.go:89] found id: ""
	I0805 13:00:40.857184  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.857192  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:40.857198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:40.857251  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:40.892933  451238 cri.go:89] found id: ""
	I0805 13:00:40.892970  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.892981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:40.892990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:40.893046  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:40.927128  451238 cri.go:89] found id: ""
	I0805 13:00:40.927163  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.927173  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:40.927182  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:40.927237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:40.961790  451238 cri.go:89] found id: ""
	I0805 13:00:40.961817  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.961826  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:40.961832  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:40.961886  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:40.996249  451238 cri.go:89] found id: ""
	I0805 13:00:40.996282  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.996293  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:40.996300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:40.996371  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:41.032305  451238 cri.go:89] found id: ""
	I0805 13:00:41.032332  451238 logs.go:276] 0 containers: []
	W0805 13:00:41.032342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:41.032358  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:41.032375  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:41.075993  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:41.076027  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:41.126020  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:41.126057  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:41.140263  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:41.140288  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:41.216648  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:41.216670  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:41.216683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:38.524812  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.024597  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.909518  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:40.910256  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.410062  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.443930  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.945026  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.796367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:43.810086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:43.810162  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.844373  451238 cri.go:89] found id: ""
	I0805 13:00:43.844410  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.844422  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:43.844430  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:43.844502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:43.880249  451238 cri.go:89] found id: ""
	I0805 13:00:43.880285  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.880295  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:43.880303  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:43.880376  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:43.921279  451238 cri.go:89] found id: ""
	I0805 13:00:43.921313  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.921323  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:43.921329  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:43.921382  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:43.963736  451238 cri.go:89] found id: ""
	I0805 13:00:43.963782  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.963794  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:43.963803  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:43.963869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:44.009001  451238 cri.go:89] found id: ""
	I0805 13:00:44.009038  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.009050  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:44.009057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:44.009128  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:44.059484  451238 cri.go:89] found id: ""
	I0805 13:00:44.059514  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.059526  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:44.059534  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:44.059605  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:44.102043  451238 cri.go:89] found id: ""
	I0805 13:00:44.102075  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.102088  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:44.102094  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:44.102170  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:44.137518  451238 cri.go:89] found id: ""
	I0805 13:00:44.137558  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.137569  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:44.137584  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:44.137600  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:44.188139  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:44.188175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:44.202544  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:44.202588  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:44.278486  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:44.278508  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:44.278521  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:44.363419  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:44.363458  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:46.905665  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:46.922141  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:46.922206  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.025461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.523997  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.908437  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.409410  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.445919  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.944243  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.963468  451238 cri.go:89] found id: ""
	I0805 13:00:46.963494  451238 logs.go:276] 0 containers: []
	W0805 13:00:46.963502  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:46.963508  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:46.963557  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:47.003445  451238 cri.go:89] found id: ""
	I0805 13:00:47.003472  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.003480  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:47.003486  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:47.003537  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:47.043271  451238 cri.go:89] found id: ""
	I0805 13:00:47.043306  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.043318  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:47.043326  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:47.043394  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:47.079843  451238 cri.go:89] found id: ""
	I0805 13:00:47.079874  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.079884  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:47.079893  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:47.079954  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:47.116819  451238 cri.go:89] found id: ""
	I0805 13:00:47.116847  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.116856  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:47.116861  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:47.116917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:47.156302  451238 cri.go:89] found id: ""
	I0805 13:00:47.156331  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.156340  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:47.156353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:47.156410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:47.200419  451238 cri.go:89] found id: ""
	I0805 13:00:47.200449  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.200463  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:47.200469  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:47.200533  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:47.237483  451238 cri.go:89] found id: ""
	I0805 13:00:47.237515  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.237522  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:47.237532  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:47.237545  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:47.251598  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:47.251632  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:47.326457  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:47.326483  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:47.326501  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.410413  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:47.410455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:47.452696  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:47.452732  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.005335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:50.019610  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:50.019679  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:50.057401  451238 cri.go:89] found id: ""
	I0805 13:00:50.057435  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.057447  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:50.057456  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:50.057516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:50.101710  451238 cri.go:89] found id: ""
	I0805 13:00:50.101743  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.101751  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:50.101758  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:50.101822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:50.139624  451238 cri.go:89] found id: ""
	I0805 13:00:50.139658  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.139669  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:50.139677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:50.139761  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:50.176004  451238 cri.go:89] found id: ""
	I0805 13:00:50.176031  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.176039  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:50.176045  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:50.176123  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:50.219319  451238 cri.go:89] found id: ""
	I0805 13:00:50.219352  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.219362  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:50.219369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:50.219437  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:50.287443  451238 cri.go:89] found id: ""
	I0805 13:00:50.287478  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.287489  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:50.287498  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:50.287582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:50.321018  451238 cri.go:89] found id: ""
	I0805 13:00:50.321047  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.321056  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:50.321063  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:50.321124  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:50.354559  451238 cri.go:89] found id: ""
	I0805 13:00:50.354597  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.354610  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:50.354625  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:50.354642  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:50.398621  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:50.398657  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.451693  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:50.451735  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:50.466810  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:50.466851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:50.542431  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:50.542461  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:50.542482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.525977  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.025280  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.025760  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.410198  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.908466  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.946086  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.445962  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.128466  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:53.144139  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:53.144216  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:53.178383  451238 cri.go:89] found id: ""
	I0805 13:00:53.178427  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.178438  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:53.178447  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:53.178516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:53.220312  451238 cri.go:89] found id: ""
	I0805 13:00:53.220348  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.220358  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:53.220365  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:53.220432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:53.255352  451238 cri.go:89] found id: ""
	I0805 13:00:53.255380  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.255390  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:53.255398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:53.255473  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:53.293254  451238 cri.go:89] found id: ""
	I0805 13:00:53.293292  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.293311  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:53.293320  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:53.293395  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:53.329407  451238 cri.go:89] found id: ""
	I0805 13:00:53.329436  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.329448  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:53.329455  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:53.329523  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:53.362838  451238 cri.go:89] found id: ""
	I0805 13:00:53.362868  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.362876  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:53.362883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:53.362957  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:53.399283  451238 cri.go:89] found id: ""
	I0805 13:00:53.399313  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.399324  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:53.399332  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:53.399405  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:53.438527  451238 cri.go:89] found id: ""
	I0805 13:00:53.438558  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.438567  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:53.438578  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:53.438597  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:53.492709  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:53.492760  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:53.507522  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:53.507555  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:53.581690  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:53.581710  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:53.581724  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.664402  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:53.664451  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.209640  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:56.224403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:56.224487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:56.266214  451238 cri.go:89] found id: ""
	I0805 13:00:56.266243  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.266254  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:56.266263  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:56.266328  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:56.304034  451238 cri.go:89] found id: ""
	I0805 13:00:56.304070  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.304082  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:56.304091  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:56.304172  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:56.342133  451238 cri.go:89] found id: ""
	I0805 13:00:56.342159  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.342167  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:56.342173  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:56.342225  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:56.378549  451238 cri.go:89] found id: ""
	I0805 13:00:56.378588  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.378599  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:56.378606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:56.378667  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:56.415613  451238 cri.go:89] found id: ""
	I0805 13:00:56.415641  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.415651  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:56.415657  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:56.415715  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:56.451915  451238 cri.go:89] found id: ""
	I0805 13:00:56.451944  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.451953  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:56.451960  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:56.452021  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:56.492219  451238 cri.go:89] found id: ""
	I0805 13:00:56.492255  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.492267  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:56.492275  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:56.492347  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:56.534564  451238 cri.go:89] found id: ""
	I0805 13:00:56.534606  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.534618  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:56.534632  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:56.534652  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:56.548772  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:56.548813  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:56.625649  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:56.625678  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:56.625695  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:56.716735  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:56.716787  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.771881  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:56.771910  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:54.525355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.025659  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:54.908805  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:56.909601  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:55.943885  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.945233  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.325624  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:59.338796  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:59.338869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:59.375002  451238 cri.go:89] found id: ""
	I0805 13:00:59.375039  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.375050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:59.375059  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:59.375138  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:59.410778  451238 cri.go:89] found id: ""
	I0805 13:00:59.410800  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.410810  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:59.410817  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:59.410873  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:59.453728  451238 cri.go:89] found id: ""
	I0805 13:00:59.453760  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.453771  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:59.453779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:59.453845  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:59.492968  451238 cri.go:89] found id: ""
	I0805 13:00:59.493002  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.493013  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:59.493021  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:59.493091  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:59.533342  451238 cri.go:89] found id: ""
	I0805 13:00:59.533372  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.533383  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:59.533390  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:59.533445  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:59.569677  451238 cri.go:89] found id: ""
	I0805 13:00:59.569705  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.569715  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:59.569722  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:59.569789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:59.605106  451238 cri.go:89] found id: ""
	I0805 13:00:59.605139  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.605150  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:59.605158  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:59.605228  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:59.639948  451238 cri.go:89] found id: ""
	I0805 13:00:59.639980  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.639989  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:59.640000  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:59.640016  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:59.679926  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:59.679956  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.731545  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:59.731591  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:59.746286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:59.746320  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:59.828398  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:59.828420  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:59.828439  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:59.524365  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.525092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.410713  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.909619  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.945483  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.445780  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.412560  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:02.429633  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:02.429718  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:02.475916  451238 cri.go:89] found id: ""
	I0805 13:01:02.475951  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.475963  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:02.475971  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:02.476061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:02.528807  451238 cri.go:89] found id: ""
	I0805 13:01:02.528837  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.528849  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:02.528856  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:02.528924  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:02.575164  451238 cri.go:89] found id: ""
	I0805 13:01:02.575194  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.575210  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:02.575218  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:02.575286  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:02.614709  451238 cri.go:89] found id: ""
	I0805 13:01:02.614800  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.614815  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:02.614824  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:02.614902  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:02.654941  451238 cri.go:89] found id: ""
	I0805 13:01:02.654979  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.654990  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:02.654997  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:02.655069  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:02.690552  451238 cri.go:89] found id: ""
	I0805 13:01:02.690586  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.690595  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:02.690602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:02.690657  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:02.725607  451238 cri.go:89] found id: ""
	I0805 13:01:02.725644  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.725656  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:02.725665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:02.725745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:02.760180  451238 cri.go:89] found id: ""
	I0805 13:01:02.760211  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.760223  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:02.760244  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:02.760262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:02.813071  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:02.813128  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:02.828633  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:02.828665  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:02.898049  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:02.898074  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:02.898087  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.988077  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:02.988124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:05.532719  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:05.546423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:05.546489  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:05.590978  451238 cri.go:89] found id: ""
	I0805 13:01:05.591006  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.591013  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:05.591019  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:05.591071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:05.631251  451238 cri.go:89] found id: ""
	I0805 13:01:05.631287  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.631298  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:05.631306  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:05.631391  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:05.671826  451238 cri.go:89] found id: ""
	I0805 13:01:05.671863  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.671875  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:05.671883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:05.671951  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:05.708147  451238 cri.go:89] found id: ""
	I0805 13:01:05.708176  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.708186  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:05.708194  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:05.708262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:05.741962  451238 cri.go:89] found id: ""
	I0805 13:01:05.741994  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.742006  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:05.742015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:05.742087  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:05.777930  451238 cri.go:89] found id: ""
	I0805 13:01:05.777965  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.777976  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:05.777985  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:05.778061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:05.813066  451238 cri.go:89] found id: ""
	I0805 13:01:05.813099  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.813111  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:05.813119  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:05.813189  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:05.849382  451238 cri.go:89] found id: ""
	I0805 13:01:05.849410  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.849418  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:05.849428  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:05.849440  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:05.903376  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:05.903423  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:05.918540  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:05.918575  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:05.990608  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:05.990637  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:05.990658  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:06.072524  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:06.072571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:04.025528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.525325  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.409190  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.409231  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:07.445278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.617528  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:08.631637  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:08.631713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:08.669999  451238 cri.go:89] found id: ""
	I0805 13:01:08.670039  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.670050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:08.670065  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:08.670147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:08.705322  451238 cri.go:89] found id: ""
	I0805 13:01:08.705356  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.705365  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:08.705370  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:08.705442  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:08.744884  451238 cri.go:89] found id: ""
	I0805 13:01:08.744915  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.744927  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:08.744936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:08.745018  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:08.782394  451238 cri.go:89] found id: ""
	I0805 13:01:08.782428  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.782440  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:08.782448  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:08.782518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:08.816989  451238 cri.go:89] found id: ""
	I0805 13:01:08.817018  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.817027  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:08.817034  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:08.817106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:08.856389  451238 cri.go:89] found id: ""
	I0805 13:01:08.856420  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.856431  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:08.856439  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:08.856506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:08.891942  451238 cri.go:89] found id: ""
	I0805 13:01:08.891975  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.891986  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:08.891995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:08.892064  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:08.930329  451238 cri.go:89] found id: ""
	I0805 13:01:08.930364  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.930375  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:08.930389  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:08.930406  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.972574  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:08.972610  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:09.026194  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:09.026228  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:09.040973  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:09.041002  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:09.115094  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:09.115121  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:09.115143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:11.698322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:11.711841  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:11.711927  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:11.749152  451238 cri.go:89] found id: ""
	I0805 13:01:11.749187  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.749199  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:11.749207  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:11.749274  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:11.785395  451238 cri.go:89] found id: ""
	I0805 13:01:11.785430  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.785441  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:11.785449  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:11.785516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:11.822240  451238 cri.go:89] found id: ""
	I0805 13:01:11.822282  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.822293  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:11.822302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:11.822372  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:11.858755  451238 cri.go:89] found id: ""
	I0805 13:01:11.858794  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.858805  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:11.858814  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:11.858884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:11.893064  451238 cri.go:89] found id: ""
	I0805 13:01:11.893101  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.893113  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:11.893121  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:11.893195  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:11.930965  451238 cri.go:89] found id: ""
	I0805 13:01:11.931003  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.931015  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:11.931025  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:11.931089  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:09.025566  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.525069  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.910618  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.409157  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:09.944797  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:12.445029  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.967594  451238 cri.go:89] found id: ""
	I0805 13:01:11.967620  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.967630  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:11.967638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:11.967697  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:12.004978  451238 cri.go:89] found id: ""
	I0805 13:01:12.005007  451238 logs.go:276] 0 containers: []
	W0805 13:01:12.005015  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:12.005025  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:12.005037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:12.087476  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:12.087500  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:12.087515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:12.177690  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:12.177757  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:12.222858  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:12.222889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:12.273322  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:12.273362  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:14.788210  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:14.802351  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:14.802426  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:14.837705  451238 cri.go:89] found id: ""
	I0805 13:01:14.837736  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.837746  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:14.837755  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:14.837824  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:14.873389  451238 cri.go:89] found id: ""
	I0805 13:01:14.873420  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.873430  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:14.873438  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:14.873506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:14.913969  451238 cri.go:89] found id: ""
	I0805 13:01:14.913999  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.914009  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:14.914018  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:14.914081  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:14.953478  451238 cri.go:89] found id: ""
	I0805 13:01:14.953510  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.953521  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:14.953528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:14.953584  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:14.992166  451238 cri.go:89] found id: ""
	I0805 13:01:14.992197  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.992206  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:14.992212  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:14.992291  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:15.031258  451238 cri.go:89] found id: ""
	I0805 13:01:15.031285  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.031293  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:15.031300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:15.031353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:15.068944  451238 cri.go:89] found id: ""
	I0805 13:01:15.068972  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.068980  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:15.068986  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:15.069042  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:15.105413  451238 cri.go:89] found id: ""
	I0805 13:01:15.105443  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.105454  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:15.105467  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:15.105489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:15.161925  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:15.161969  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:15.177174  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:15.177206  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:15.257950  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:15.257975  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:15.257989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:15.336672  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:15.336716  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:13.526088  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:16.025513  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:13.908773  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:15.908817  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.910431  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:14.945842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.444869  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.876314  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:17.889842  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:17.889909  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:17.928050  451238 cri.go:89] found id: ""
	I0805 13:01:17.928077  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.928086  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:17.928092  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:17.928150  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:17.965713  451238 cri.go:89] found id: ""
	I0805 13:01:17.965751  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.965762  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:17.965770  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:17.965837  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:18.002938  451238 cri.go:89] found id: ""
	I0805 13:01:18.002972  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.002984  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:18.002992  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:18.003062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:18.040140  451238 cri.go:89] found id: ""
	I0805 13:01:18.040178  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.040190  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:18.040198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:18.040269  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:18.075427  451238 cri.go:89] found id: ""
	I0805 13:01:18.075463  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.075475  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:18.075490  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:18.075558  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:18.113469  451238 cri.go:89] found id: ""
	I0805 13:01:18.113507  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.113521  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:18.113528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:18.113587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:18.152626  451238 cri.go:89] found id: ""
	I0805 13:01:18.152662  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.152672  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:18.152678  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:18.152745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:18.189540  451238 cri.go:89] found id: ""
	I0805 13:01:18.189577  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.189590  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:18.189602  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:18.189618  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:18.244314  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:18.244353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:18.257912  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:18.257939  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:18.339659  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:18.339682  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:18.339699  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:18.425391  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:18.425449  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
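The cycle that just completed is minikube's health probe while the control plane is down: it asks the CRI runtime for each expected component by name, finds no containers, and falls back to dumping kubelet, dmesg, CRI-O, and container-status output. A minimal Go sketch of the same container scan (an illustrative reproduction driving crictl through os/exec, not minikube's own cri.go; it assumes crictl is installed and sudo is available on the node):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// listContainers asks crictl for container IDs whose name matches the given
// component, mirroring the "crictl ps -a --quiet --name=..." calls in the log.
func listContainers(name string) ([]string, error) {
    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    if err != nil {
        return nil, err
    }
    return strings.Fields(string(out)), nil
}

func main() {
    components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
    for _, c := range components {
        ids, err := listContainers(c)
        if err != nil {
            fmt.Printf("error listing %q: %v\n", c, err)
            continue
        }
        fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    }
}

On this node every component returns an empty ID list, which is why the log keeps reporting "0 containers" before each retry.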
	I0805 13:01:20.975889  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:20.989798  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:20.989868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:21.030858  451238 cri.go:89] found id: ""
	I0805 13:01:21.030894  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.030906  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:21.030915  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:21.030979  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:21.067367  451238 cri.go:89] found id: ""
	I0805 13:01:21.067402  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.067411  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:21.067419  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:21.067476  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:21.104307  451238 cri.go:89] found id: ""
	I0805 13:01:21.104337  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.104352  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:21.104361  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:21.104424  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:21.141486  451238 cri.go:89] found id: ""
	I0805 13:01:21.141519  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.141531  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:21.141539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:21.141606  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:21.179247  451238 cri.go:89] found id: ""
	I0805 13:01:21.179305  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.179317  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:21.179330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:21.179406  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:21.215030  451238 cri.go:89] found id: ""
	I0805 13:01:21.215065  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.215075  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:21.215083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:21.215152  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:21.252982  451238 cri.go:89] found id: ""
	I0805 13:01:21.253008  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.253016  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:21.253022  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:21.253097  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:21.290256  451238 cri.go:89] found id: ""
	I0805 13:01:21.290292  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.290302  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:21.290325  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:21.290343  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:21.342809  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:21.342855  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:21.357959  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:21.358000  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:21.433087  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:21.433120  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:21.433143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:21.514261  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:21.514312  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:18.025965  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.524832  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.409943  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:22.909233  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:19.445074  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:21.445547  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:23.445637  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.060402  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:24.076056  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:24.076131  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:24.115976  451238 cri.go:89] found id: ""
	I0805 13:01:24.116009  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.116022  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:24.116031  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:24.116111  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:24.158411  451238 cri.go:89] found id: ""
	I0805 13:01:24.158440  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.158448  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:24.158454  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:24.158520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:24.194589  451238 cri.go:89] found id: ""
	I0805 13:01:24.194624  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.194635  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:24.194644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:24.194720  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:24.231528  451238 cri.go:89] found id: ""
	I0805 13:01:24.231562  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.231569  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:24.231576  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:24.231649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:24.268491  451238 cri.go:89] found id: ""
	I0805 13:01:24.268523  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.268532  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:24.268538  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:24.268602  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:24.306718  451238 cri.go:89] found id: ""
	I0805 13:01:24.306752  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.306763  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:24.306772  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:24.306839  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:24.343552  451238 cri.go:89] found id: ""
	I0805 13:01:24.343578  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.343586  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:24.343593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:24.343649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:24.384555  451238 cri.go:89] found id: ""
	I0805 13:01:24.384590  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.384602  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:24.384615  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:24.384633  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.430256  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:24.430298  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:24.484616  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:24.484661  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:24.500926  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:24.500958  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:24.581379  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:24.581410  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:24.581424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:22.525806  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.526411  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.024452  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.408887  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.409717  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.945113  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:28.444740  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.167538  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:27.181959  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:27.182035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:27.223243  451238 cri.go:89] found id: ""
	I0805 13:01:27.223282  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.223293  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:27.223301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:27.223374  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:27.257806  451238 cri.go:89] found id: ""
	I0805 13:01:27.257843  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.257856  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:27.257864  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:27.257940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:27.304306  451238 cri.go:89] found id: ""
	I0805 13:01:27.304342  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.304353  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:27.304370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:27.304439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:27.342595  451238 cri.go:89] found id: ""
	I0805 13:01:27.342623  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.342631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:27.342638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:27.342707  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:27.385628  451238 cri.go:89] found id: ""
	I0805 13:01:27.385661  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.385670  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:27.385677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:27.385760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:27.425059  451238 cri.go:89] found id: ""
	I0805 13:01:27.425091  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.425100  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:27.425106  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:27.425175  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:27.465739  451238 cri.go:89] found id: ""
	I0805 13:01:27.465783  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.465794  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:27.465807  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:27.465869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:27.506431  451238 cri.go:89] found id: ""
	I0805 13:01:27.506460  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.506468  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:27.506477  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:27.506494  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:27.586440  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:27.586467  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:27.586482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.667826  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:27.667869  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:27.710458  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:27.710496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:27.763057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:27.763100  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.278799  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:30.293788  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:30.293874  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:30.336209  451238 cri.go:89] found id: ""
	I0805 13:01:30.336240  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.336248  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:30.336255  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:30.336323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:30.371593  451238 cri.go:89] found id: ""
	I0805 13:01:30.371627  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.371642  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:30.371649  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:30.371714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:30.408266  451238 cri.go:89] found id: ""
	I0805 13:01:30.408298  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.408317  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:30.408325  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:30.408388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:30.448841  451238 cri.go:89] found id: ""
	I0805 13:01:30.448864  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.448872  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:30.448878  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:30.448940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:30.488367  451238 cri.go:89] found id: ""
	I0805 13:01:30.488403  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.488411  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:30.488418  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:30.488485  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:30.527131  451238 cri.go:89] found id: ""
	I0805 13:01:30.527163  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.527173  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:30.527181  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:30.527249  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:30.568089  451238 cri.go:89] found id: ""
	I0805 13:01:30.568122  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.568131  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:30.568138  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:30.568203  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:30.605952  451238 cri.go:89] found id: ""
	I0805 13:01:30.605990  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.606007  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:30.606021  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:30.606041  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:30.656449  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:30.656491  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:30.710124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:30.710164  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.724417  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:30.724455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:30.820639  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:30.820669  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:30.820687  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:29.025377  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:31.525340  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:29.909043  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.410359  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:30.445047  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.445931  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:33.403497  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:33.419581  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:33.419651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:33.462011  451238 cri.go:89] found id: ""
	I0805 13:01:33.462042  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.462051  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:33.462057  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:33.462126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:33.502476  451238 cri.go:89] found id: ""
	I0805 13:01:33.502509  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.502519  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:33.502527  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:33.502601  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:33.547392  451238 cri.go:89] found id: ""
	I0805 13:01:33.547421  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.547430  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:33.547437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:33.547490  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:33.584013  451238 cri.go:89] found id: ""
	I0805 13:01:33.584040  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.584048  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:33.584054  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:33.584125  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:33.617325  451238 cri.go:89] found id: ""
	I0805 13:01:33.617359  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.617367  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:33.617374  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:33.617429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:33.651922  451238 cri.go:89] found id: ""
	I0805 13:01:33.651959  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.651971  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:33.651980  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:33.652049  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:33.689487  451238 cri.go:89] found id: ""
	I0805 13:01:33.689515  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.689522  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:33.689529  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:33.689580  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:33.723220  451238 cri.go:89] found id: ""
	I0805 13:01:33.723251  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.723260  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:33.723270  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:33.723282  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:33.777271  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:33.777311  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:33.792497  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:33.792532  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:33.866801  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:33.866826  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:33.866842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.946739  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:33.946774  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:36.486108  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:36.501316  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:36.501397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:36.542082  451238 cri.go:89] found id: ""
	I0805 13:01:36.542118  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.542130  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:36.542139  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:36.542217  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:36.581005  451238 cri.go:89] found id: ""
	I0805 13:01:36.581047  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.581059  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:36.581068  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:36.581148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:36.623945  451238 cri.go:89] found id: ""
	I0805 13:01:36.623974  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.623982  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:36.623987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:36.624041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:36.661632  451238 cri.go:89] found id: ""
	I0805 13:01:36.661665  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.661673  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:36.661680  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:36.661738  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:36.701808  451238 cri.go:89] found id: ""
	I0805 13:01:36.701839  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.701850  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:36.701857  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:36.701941  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:36.742287  451238 cri.go:89] found id: ""
	I0805 13:01:36.742320  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.742331  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:36.742340  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:36.742410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:36.794581  451238 cri.go:89] found id: ""
	I0805 13:01:36.794610  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.794621  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:36.794629  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:36.794690  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:36.833271  451238 cri.go:89] found id: ""
	I0805 13:01:36.833301  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.833311  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:36.833325  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:36.833346  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:36.921427  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:36.921467  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:34.024353  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.025557  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.909401  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.909529  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.945077  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.945632  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.965468  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:36.965503  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:37.018475  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:37.018515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:37.033671  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:37.033697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:37.105339  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
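Every "describe nodes" attempt above fails with "connection refused" on localhost:8443, meaning nothing is accepting connections on the apiserver port inside the guest. A small probe along these lines (a sketch; the host and port are taken from the log, the rest is an assumption) separates a refused port from a hung apiserver:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Probe the apiserver port seen in the log; "connection refused" returns
    // immediately, while a hung listener would instead hit the timeout.
    conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
    if err != nil {
        fmt.Println("apiserver not reachable:", err)
        return
    }
    conn.Close()
    fmt.Println("something is listening on 127.0.0.1:8443")
}

An immediately refused connection is consistent with the crictl scans above, which find no kube-apiserver container at all.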
	I0805 13:01:39.606042  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:39.619215  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:39.619296  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:39.655614  451238 cri.go:89] found id: ""
	I0805 13:01:39.655648  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.655660  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:39.655668  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:39.655760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:39.691489  451238 cri.go:89] found id: ""
	I0805 13:01:39.691523  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.691535  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:39.691543  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:39.691610  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:39.726394  451238 cri.go:89] found id: ""
	I0805 13:01:39.726427  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.726438  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:39.726446  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:39.726518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:39.759847  451238 cri.go:89] found id: ""
	I0805 13:01:39.759897  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.759909  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:39.759918  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:39.759988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:39.795011  451238 cri.go:89] found id: ""
	I0805 13:01:39.795043  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.795051  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:39.795057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:39.795120  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:39.831302  451238 cri.go:89] found id: ""
	I0805 13:01:39.831336  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.831346  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:39.831356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:39.831432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:39.866506  451238 cri.go:89] found id: ""
	I0805 13:01:39.866540  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.866547  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:39.866554  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:39.866622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:39.898083  451238 cri.go:89] found id: ""
	I0805 13:01:39.898108  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.898115  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:39.898128  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:39.898147  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:39.912192  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:39.912221  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:39.989216  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.989246  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:39.989262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:40.069702  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:40.069746  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:40.118390  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:40.118428  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:38.525929  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:40.527120  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:38.909905  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.408953  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.409966  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:39.445474  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.944704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.944956  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:42.669421  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:42.682287  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:42.682359  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:42.722933  451238 cri.go:89] found id: ""
	I0805 13:01:42.722961  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.722969  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:42.722975  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:42.723037  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:42.757604  451238 cri.go:89] found id: ""
	I0805 13:01:42.757635  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.757646  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:42.757654  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:42.757723  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:42.795825  451238 cri.go:89] found id: ""
	I0805 13:01:42.795852  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.795863  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:42.795871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:42.795939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:42.831749  451238 cri.go:89] found id: ""
	I0805 13:01:42.831779  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.831791  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:42.831800  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:42.831862  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:42.866280  451238 cri.go:89] found id: ""
	I0805 13:01:42.866310  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.866322  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:42.866330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:42.866390  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:42.904393  451238 cri.go:89] found id: ""
	I0805 13:01:42.904427  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.904436  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:42.904445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:42.904510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:42.943175  451238 cri.go:89] found id: ""
	I0805 13:01:42.943204  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.943215  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:42.943223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:42.943292  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:42.979117  451238 cri.go:89] found id: ""
	I0805 13:01:42.979144  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.979152  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:42.979174  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:42.979191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:43.032032  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:43.032070  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:43.046285  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:43.046315  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:43.120300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:43.120327  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:43.120347  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:43.209800  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:43.209851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:45.759057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:45.771984  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:45.772056  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:45.805421  451238 cri.go:89] found id: ""
	I0805 13:01:45.805451  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.805459  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:45.805466  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:45.805521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:45.841552  451238 cri.go:89] found id: ""
	I0805 13:01:45.841579  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.841588  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:45.841597  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:45.841672  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:45.878502  451238 cri.go:89] found id: ""
	I0805 13:01:45.878529  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.878537  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:45.878546  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:45.878622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:45.921145  451238 cri.go:89] found id: ""
	I0805 13:01:45.921187  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.921198  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:45.921207  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:45.921273  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:45.958408  451238 cri.go:89] found id: ""
	I0805 13:01:45.958437  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.958445  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:45.958452  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:45.958521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:45.994632  451238 cri.go:89] found id: ""
	I0805 13:01:45.994660  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.994669  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:45.994676  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:45.994727  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:46.032930  451238 cri.go:89] found id: ""
	I0805 13:01:46.032961  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.032971  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:46.032978  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:46.033041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:46.074396  451238 cri.go:89] found id: ""
	I0805 13:01:46.074429  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.074441  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:46.074454  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:46.074475  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:46.131977  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:46.132020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:46.147924  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:46.147957  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:46.222005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:46.222038  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:46.222054  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:46.306799  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:46.306842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:43.024643  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.524936  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.410385  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:47.909281  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:46.444746  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.950198  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.856982  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:48.870945  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:48.871025  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:48.930811  451238 cri.go:89] found id: ""
	I0805 13:01:48.930837  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.930852  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:48.930858  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:48.930917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:48.986604  451238 cri.go:89] found id: ""
	I0805 13:01:48.986629  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.986637  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:48.986643  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:48.986706  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:49.039433  451238 cri.go:89] found id: ""
	I0805 13:01:49.039468  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.039479  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:49.039487  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:49.039555  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:49.079593  451238 cri.go:89] found id: ""
	I0805 13:01:49.079625  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.079637  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:49.079645  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:49.079714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:49.116243  451238 cri.go:89] found id: ""
	I0805 13:01:49.116274  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.116284  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:49.116292  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:49.116360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:49.158744  451238 cri.go:89] found id: ""
	I0805 13:01:49.158779  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.158790  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:49.158799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:49.158868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:49.193747  451238 cri.go:89] found id: ""
	I0805 13:01:49.193778  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.193786  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:49.193792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:49.193843  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:49.227663  451238 cri.go:89] found id: ""
	I0805 13:01:49.227691  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.227704  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:49.227714  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:49.227727  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:49.281380  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:49.281424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:49.296286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:49.296318  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:49.368584  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:49.368609  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:49.368625  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:49.453857  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:49.453909  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:48.024987  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.026076  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.408363  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:52.410039  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.444602  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:53.445118  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.993057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:52.006066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:52.006148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:52.043179  451238 cri.go:89] found id: ""
	I0805 13:01:52.043212  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.043223  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:52.043231  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:52.043300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:52.076469  451238 cri.go:89] found id: ""
	I0805 13:01:52.076502  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.076512  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:52.076520  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:52.076586  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:52.112443  451238 cri.go:89] found id: ""
	I0805 13:01:52.112477  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.112488  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:52.112497  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:52.112569  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:52.147589  451238 cri.go:89] found id: ""
	I0805 13:01:52.147620  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.147631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:52.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:52.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:52.184016  451238 cri.go:89] found id: ""
	I0805 13:01:52.184053  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.184063  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:52.184072  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:52.184134  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:52.219670  451238 cri.go:89] found id: ""
	I0805 13:01:52.219702  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.219714  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:52.219727  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:52.219820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:52.258697  451238 cri.go:89] found id: ""
	I0805 13:01:52.258731  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.258744  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:52.258752  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:52.258818  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:52.299599  451238 cri.go:89] found id: ""
	I0805 13:01:52.299636  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.299649  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:52.299665  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:52.299683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:52.351730  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:52.351772  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:52.365993  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:52.366022  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:52.436019  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:52.436041  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:52.436056  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:52.520082  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:52.520118  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:55.064214  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:55.077358  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:55.077454  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:55.110523  451238 cri.go:89] found id: ""
	I0805 13:01:55.110555  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.110564  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:55.110570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:55.110630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:55.147870  451238 cri.go:89] found id: ""
	I0805 13:01:55.147905  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.147916  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:55.147925  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:55.147998  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:55.180769  451238 cri.go:89] found id: ""
	I0805 13:01:55.180803  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.180814  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:55.180822  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:55.180890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:55.217290  451238 cri.go:89] found id: ""
	I0805 13:01:55.217332  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.217343  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:55.217353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:55.217420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:55.254185  451238 cri.go:89] found id: ""
	I0805 13:01:55.254221  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.254232  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:55.254239  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:55.254295  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:55.290633  451238 cri.go:89] found id: ""
	I0805 13:01:55.290662  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.290673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:55.290681  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:55.290747  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:55.325830  451238 cri.go:89] found id: ""
	I0805 13:01:55.325862  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.325873  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:55.325880  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:55.325947  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:55.359887  451238 cri.go:89] found id: ""
	I0805 13:01:55.359922  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.359931  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:55.359941  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:55.359953  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:55.418251  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:55.418299  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:55.432007  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:55.432038  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:55.507177  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:55.507205  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:55.507219  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:55.586919  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:55.586965  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:52.525480  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.525653  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.024834  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.410408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:56.909810  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:55.944741  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.946654  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:58.128822  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:58.142726  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:58.142799  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:58.178027  451238 cri.go:89] found id: ""
	I0805 13:01:58.178056  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.178067  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:58.178075  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:58.178147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:58.213309  451238 cri.go:89] found id: ""
	I0805 13:01:58.213340  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.213351  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:58.213358  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:58.213430  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:58.247296  451238 cri.go:89] found id: ""
	I0805 13:01:58.247323  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.247332  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:58.247338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:58.247393  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:58.280226  451238 cri.go:89] found id: ""
	I0805 13:01:58.280255  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.280266  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:58.280277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:58.280335  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:58.316934  451238 cri.go:89] found id: ""
	I0805 13:01:58.316969  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.316981  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:58.316989  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:58.317055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:58.360931  451238 cri.go:89] found id: ""
	I0805 13:01:58.360967  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.360979  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:58.360987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:58.361055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:58.399112  451238 cri.go:89] found id: ""
	I0805 13:01:58.399150  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.399163  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:58.399171  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:58.399244  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:58.441903  451238 cri.go:89] found id: ""
	I0805 13:01:58.441930  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.441941  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:58.441952  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:58.441967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:58.524869  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:58.524908  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.562598  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:58.562634  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:58.618274  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:58.618313  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:58.633011  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:58.633039  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:58.706287  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.206971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:01.222277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:01.222357  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:01.266949  451238 cri.go:89] found id: ""
	I0805 13:02:01.266982  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.266993  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:01.267007  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:01.267108  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:01.306765  451238 cri.go:89] found id: ""
	I0805 13:02:01.306791  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.306799  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:01.306805  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:01.306859  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:01.345108  451238 cri.go:89] found id: ""
	I0805 13:02:01.345145  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.345157  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:01.345164  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:01.345227  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:01.383201  451238 cri.go:89] found id: ""
	I0805 13:02:01.383231  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.383239  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:01.383245  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:01.383307  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:01.419292  451238 cri.go:89] found id: ""
	I0805 13:02:01.419320  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.419331  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:01.419338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:01.419410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:01.456447  451238 cri.go:89] found id: ""
	I0805 13:02:01.456482  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.456492  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:01.456500  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:01.456568  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:01.496266  451238 cri.go:89] found id: ""
	I0805 13:02:01.496298  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.496306  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:01.496312  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:01.496375  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:01.541492  451238 cri.go:89] found id: ""
	I0805 13:02:01.541529  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.541541  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:01.541555  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:01.541571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:01.593140  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:01.593185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:01.606641  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:01.606670  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:01.681989  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.682015  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:01.682030  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:01.765612  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:01.765655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:59.025355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.025443  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:59.408591  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.409368  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:00.445254  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:02.944495  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:04.311066  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:04.326530  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:04.326599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:04.360091  451238 cri.go:89] found id: ""
	I0805 13:02:04.360124  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.360136  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:04.360142  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:04.360214  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:04.398983  451238 cri.go:89] found id: ""
	I0805 13:02:04.399014  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.399026  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:04.399045  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:04.399122  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:04.433444  451238 cri.go:89] found id: ""
	I0805 13:02:04.433474  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.433483  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:04.433495  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:04.433546  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:04.470113  451238 cri.go:89] found id: ""
	I0805 13:02:04.470145  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.470156  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:04.470167  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:04.470233  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:04.505695  451238 cri.go:89] found id: ""
	I0805 13:02:04.505721  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.505731  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:04.505738  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:04.505801  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:04.544093  451238 cri.go:89] found id: ""
	I0805 13:02:04.544121  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.544129  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:04.544136  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:04.544196  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:04.579663  451238 cri.go:89] found id: ""
	I0805 13:02:04.579702  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.579715  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:04.579724  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:04.579803  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:04.616524  451238 cri.go:89] found id: ""
	I0805 13:02:04.616565  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.616577  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:04.616590  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:04.616607  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:04.693014  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:04.693035  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:04.693048  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:04.772508  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:04.772550  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.813014  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:04.813043  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:04.864653  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:04.864702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:03.525225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:06.024868  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:03.908365  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.908993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.910958  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.444593  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.444737  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.378816  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:07.392347  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:07.392439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:07.425843  451238 cri.go:89] found id: ""
	I0805 13:02:07.425876  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.425887  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:07.425895  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:07.425958  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:07.461547  451238 cri.go:89] found id: ""
	I0805 13:02:07.461575  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.461584  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:07.461591  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:07.461651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:07.496461  451238 cri.go:89] found id: ""
	I0805 13:02:07.496500  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.496510  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:07.496521  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:07.496599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:07.531520  451238 cri.go:89] found id: ""
	I0805 13:02:07.531556  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.531566  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:07.531574  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:07.531642  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:07.571821  451238 cri.go:89] found id: ""
	I0805 13:02:07.571855  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.571866  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:07.571876  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:07.571948  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:07.611111  451238 cri.go:89] found id: ""
	I0805 13:02:07.611151  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.611159  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:07.611165  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:07.611226  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:07.651428  451238 cri.go:89] found id: ""
	I0805 13:02:07.651456  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.651464  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:07.651470  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:07.651520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:07.689828  451238 cri.go:89] found id: ""
	I0805 13:02:07.689858  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.689866  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:07.689877  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:07.689893  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:07.746381  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:07.746422  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.760953  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:07.760989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:07.834859  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:07.834883  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:07.834901  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:07.915344  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:07.915376  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:10.459232  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:10.472789  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:10.472853  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:10.508434  451238 cri.go:89] found id: ""
	I0805 13:02:10.508462  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.508470  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:10.508477  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:10.508539  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:10.543487  451238 cri.go:89] found id: ""
	I0805 13:02:10.543515  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.543524  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:10.543530  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:10.543582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:10.588274  451238 cri.go:89] found id: ""
	I0805 13:02:10.588302  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.588310  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:10.588317  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:10.588379  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:10.620810  451238 cri.go:89] found id: ""
	I0805 13:02:10.620851  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.620863  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:10.620871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:10.620945  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:10.657882  451238 cri.go:89] found id: ""
	I0805 13:02:10.657913  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.657923  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:10.657929  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:10.657993  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:10.696188  451238 cri.go:89] found id: ""
	I0805 13:02:10.696220  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.696229  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:10.696235  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:10.696294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:10.729942  451238 cri.go:89] found id: ""
	I0805 13:02:10.729977  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.729988  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:10.729996  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:10.730050  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:10.761972  451238 cri.go:89] found id: ""
	I0805 13:02:10.762000  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.762008  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:10.762018  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:10.762032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:10.816859  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:10.816890  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:10.830348  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:10.830379  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:10.902720  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:10.902753  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:10.902771  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:10.981464  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:10.981505  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:08.024948  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.525441  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.408841  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:12.409506  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:09.445359  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:11.944853  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:13.528296  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:13.541813  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:13.541887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:13.575632  451238 cri.go:89] found id: ""
	I0805 13:02:13.575669  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.575681  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:13.575689  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:13.575766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:13.612646  451238 cri.go:89] found id: ""
	I0805 13:02:13.612680  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.612691  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:13.612699  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:13.612755  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:13.650310  451238 cri.go:89] found id: ""
	I0805 13:02:13.650341  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.650361  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:13.650369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:13.650439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:13.686941  451238 cri.go:89] found id: ""
	I0805 13:02:13.686970  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.686981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:13.686990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:13.687054  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:13.722250  451238 cri.go:89] found id: ""
	I0805 13:02:13.722285  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.722297  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:13.722306  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:13.722388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:13.758337  451238 cri.go:89] found id: ""
	I0805 13:02:13.758367  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.758375  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:13.758382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:13.758443  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:13.792980  451238 cri.go:89] found id: ""
	I0805 13:02:13.793016  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.793028  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:13.793036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:13.793127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:13.831511  451238 cri.go:89] found id: ""
	I0805 13:02:13.831539  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.831547  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:13.831558  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:13.831579  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.885124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:13.885169  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:13.899112  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:13.899155  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:13.977058  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:13.977099  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:13.977115  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:14.060873  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:14.060911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:16.602595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:16.617557  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:16.617638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:16.660212  451238 cri.go:89] found id: ""
	I0805 13:02:16.660244  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.660256  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:16.660264  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:16.660323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:16.695515  451238 cri.go:89] found id: ""
	I0805 13:02:16.695553  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.695564  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:16.695572  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:16.695638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:16.732844  451238 cri.go:89] found id: ""
	I0805 13:02:16.732875  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.732884  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:16.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:16.732943  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:16.772465  451238 cri.go:89] found id: ""
	I0805 13:02:16.772497  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.772504  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:16.772517  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:16.772582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:16.809826  451238 cri.go:89] found id: ""
	I0805 13:02:16.809863  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.809875  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:16.809882  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:16.809949  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:16.849480  451238 cri.go:89] found id: ""
	I0805 13:02:16.849512  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.849523  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:16.849531  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:16.849598  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:16.884098  451238 cri.go:89] found id: ""
	I0805 13:02:16.884132  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.884144  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:16.884152  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:16.884222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:16.920497  451238 cri.go:89] found id: ""
	I0805 13:02:16.920523  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.920530  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:16.920541  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:16.920556  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.025299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:15.525474  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.908633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.909254  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.445321  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.945044  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:18.945630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.975287  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:16.975317  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:16.989524  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:16.989552  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:17.057997  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:17.058022  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:17.058037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:17.133721  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:17.133763  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:19.672385  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:19.687948  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:19.688017  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:19.724105  451238 cri.go:89] found id: ""
	I0805 13:02:19.724132  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.724140  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:19.724147  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:19.724199  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:19.758263  451238 cri.go:89] found id: ""
	I0805 13:02:19.758296  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.758306  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:19.758314  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:19.758381  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:19.792924  451238 cri.go:89] found id: ""
	I0805 13:02:19.792954  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.792961  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:19.792967  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:19.793023  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:19.826340  451238 cri.go:89] found id: ""
	I0805 13:02:19.826367  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.826375  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:19.826382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:19.826434  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:19.864289  451238 cri.go:89] found id: ""
	I0805 13:02:19.864323  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.864334  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:19.864343  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:19.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:19.899630  451238 cri.go:89] found id: ""
	I0805 13:02:19.899661  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.899673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:19.899682  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:19.899786  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:19.935798  451238 cri.go:89] found id: ""
	I0805 13:02:19.935826  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.935836  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:19.935843  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:19.935896  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:19.977984  451238 cri.go:89] found id: ""
	I0805 13:02:19.978019  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.978031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:19.978044  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:19.978062  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:20.030096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:20.030131  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:20.043878  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:20.043940  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:20.119251  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:20.119279  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:20.119297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:20.202445  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:20.202488  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:18.026282  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:20.524225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:19.408760  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.410108  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.445045  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.944150  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:22.744728  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:22.758606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:22.758675  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:22.791663  451238 cri.go:89] found id: ""
	I0805 13:02:22.791696  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.791708  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:22.791717  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:22.791821  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:22.826568  451238 cri.go:89] found id: ""
	I0805 13:02:22.826594  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.826603  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:22.826609  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:22.826671  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:22.860430  451238 cri.go:89] found id: ""
	I0805 13:02:22.860459  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.860470  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:22.860479  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:22.860543  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:22.893815  451238 cri.go:89] found id: ""
	I0805 13:02:22.893846  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.893854  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:22.893860  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:22.893929  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:22.929804  451238 cri.go:89] found id: ""
	I0805 13:02:22.929830  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.929840  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:22.929849  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:22.929915  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:22.964918  451238 cri.go:89] found id: ""
	I0805 13:02:22.964950  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.964961  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:22.964969  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:22.965035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:23.000236  451238 cri.go:89] found id: ""
	I0805 13:02:23.000271  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.000282  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:23.000290  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:23.000354  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:23.052075  451238 cri.go:89] found id: ""
	I0805 13:02:23.052108  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.052117  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:23.052128  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:23.052141  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:23.104213  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:23.104248  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:23.118811  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:23.118851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:23.188552  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:23.188578  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:23.188595  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:23.272518  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:23.272562  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:25.811116  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:25.825030  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:25.825113  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:25.864282  451238 cri.go:89] found id: ""
	I0805 13:02:25.864318  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.864331  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:25.864339  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:25.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:25.901712  451238 cri.go:89] found id: ""
	I0805 13:02:25.901746  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.901754  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:25.901760  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:25.901822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:25.937036  451238 cri.go:89] found id: ""
	I0805 13:02:25.937068  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.937077  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:25.937083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:25.937146  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:25.974598  451238 cri.go:89] found id: ""
	I0805 13:02:25.974627  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.974638  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:25.974646  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:25.974713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:26.011083  451238 cri.go:89] found id: ""
	I0805 13:02:26.011116  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.011124  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:26.011130  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:26.011190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:26.050187  451238 cri.go:89] found id: ""
	I0805 13:02:26.050219  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.050231  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:26.050242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:26.050317  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:26.085038  451238 cri.go:89] found id: ""
	I0805 13:02:26.085067  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.085077  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:26.085086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:26.085151  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:26.122121  451238 cri.go:89] found id: ""
	I0805 13:02:26.122150  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.122158  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:26.122173  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:26.122191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:26.193819  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:26.193850  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:26.193865  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:26.273453  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:26.273492  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:26.312474  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:26.312509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:26.363176  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:26.363215  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:22.524303  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:24.525047  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.528347  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.909120  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.409913  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:25.944824  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.444803  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.878523  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:28.892242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:28.892330  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:28.928650  451238 cri.go:89] found id: ""
	I0805 13:02:28.928682  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.928693  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:28.928702  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:28.928772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:28.965582  451238 cri.go:89] found id: ""
	I0805 13:02:28.965615  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.965626  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:28.965634  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:28.965698  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:29.001824  451238 cri.go:89] found id: ""
	I0805 13:02:29.001855  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.001865  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:29.001874  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:29.001939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:29.037688  451238 cri.go:89] found id: ""
	I0805 13:02:29.037715  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.037722  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:29.037730  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:29.037780  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:29.078495  451238 cri.go:89] found id: ""
	I0805 13:02:29.078540  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.078552  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:29.078559  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:29.078627  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:29.113728  451238 cri.go:89] found id: ""
	I0805 13:02:29.113764  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.113776  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:29.113786  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:29.113851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:29.147590  451238 cri.go:89] found id: ""
	I0805 13:02:29.147618  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.147629  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:29.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:29.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:29.186015  451238 cri.go:89] found id: ""
	I0805 13:02:29.186043  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.186052  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:29.186062  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:29.186074  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:29.242795  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:29.242850  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:29.257012  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:29.257046  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:29.330528  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:29.330555  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:29.330569  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:29.418109  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:29.418145  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:29.025256  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.526187  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.909283  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.409736  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:30.944380  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:32.945421  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.986351  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:32.001265  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:32.001349  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:32.035152  451238 cri.go:89] found id: ""
	I0805 13:02:32.035191  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.035200  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:32.035208  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:32.035262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:32.069086  451238 cri.go:89] found id: ""
	I0805 13:02:32.069118  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.069128  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:32.069136  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:32.069204  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:32.103788  451238 cri.go:89] found id: ""
	I0805 13:02:32.103814  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.103822  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:32.103831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:32.103893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:32.139104  451238 cri.go:89] found id: ""
	I0805 13:02:32.139138  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.139149  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:32.139157  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:32.139222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:32.192759  451238 cri.go:89] found id: ""
	I0805 13:02:32.192789  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.192798  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:32.192804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:32.192865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:32.231080  451238 cri.go:89] found id: ""
	I0805 13:02:32.231115  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.231126  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:32.231135  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:32.231200  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:32.266547  451238 cri.go:89] found id: ""
	I0805 13:02:32.266578  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.266587  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:32.266594  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:32.266647  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:32.301828  451238 cri.go:89] found id: ""
	I0805 13:02:32.301856  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.301865  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:32.301875  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:32.301888  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:32.358439  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:32.358479  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:32.372349  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:32.372383  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:32.442335  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:32.442369  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:32.442388  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:32.521705  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:32.521744  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:35.060867  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:35.074370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:35.074433  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:35.111149  451238 cri.go:89] found id: ""
	I0805 13:02:35.111181  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.111191  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:35.111200  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:35.111268  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:35.153781  451238 cri.go:89] found id: ""
	I0805 13:02:35.153814  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.153825  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:35.153832  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:35.153894  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:35.193207  451238 cri.go:89] found id: ""
	I0805 13:02:35.193239  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.193256  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:35.193291  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:35.193370  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:35.243879  451238 cri.go:89] found id: ""
	I0805 13:02:35.243915  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.243928  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:35.243936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:35.243994  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:35.297922  451238 cri.go:89] found id: ""
	I0805 13:02:35.297954  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.297966  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:35.297973  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:35.298039  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:35.333201  451238 cri.go:89] found id: ""
	I0805 13:02:35.333234  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.333245  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:35.333254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:35.333316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:35.366327  451238 cri.go:89] found id: ""
	I0805 13:02:35.366361  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.366373  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:35.366381  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:35.366449  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:35.401515  451238 cri.go:89] found id: ""
	I0805 13:02:35.401546  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.401555  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:35.401565  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:35.401578  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:35.451057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:35.451090  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:35.465054  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:35.465095  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:35.547111  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:35.547142  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:35.547160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:35.627451  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:35.627490  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:34.025104  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:36.524904  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:33.908489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.909183  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.909360  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.445317  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.446056  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:38.169022  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:38.181892  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:38.181968  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:38.217919  451238 cri.go:89] found id: ""
	I0805 13:02:38.217951  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.217961  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:38.217970  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:38.218041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:38.253967  451238 cri.go:89] found id: ""
	I0805 13:02:38.253999  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.254008  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:38.254020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:38.254073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:38.293757  451238 cri.go:89] found id: ""
	I0805 13:02:38.293789  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.293801  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:38.293809  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:38.293904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:38.329657  451238 cri.go:89] found id: ""
	I0805 13:02:38.329686  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.329697  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:38.329705  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:38.329772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:38.364602  451238 cri.go:89] found id: ""
	I0805 13:02:38.364635  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.364647  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:38.364656  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:38.364732  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:38.396352  451238 cri.go:89] found id: ""
	I0805 13:02:38.396382  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.396394  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:38.396403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:38.396471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:38.429172  451238 cri.go:89] found id: ""
	I0805 13:02:38.429203  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.429214  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:38.429223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:38.429293  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:38.464855  451238 cri.go:89] found id: ""
	I0805 13:02:38.464891  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.464903  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:38.464916  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:38.464931  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:38.514924  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:38.514967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:38.530076  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:38.530113  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:38.602472  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:38.602494  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:38.602509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:38.683905  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:38.683948  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:41.226878  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:41.245027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:41.245100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:41.280482  451238 cri.go:89] found id: ""
	I0805 13:02:41.280511  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.280523  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:41.280532  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:41.280597  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:41.316592  451238 cri.go:89] found id: ""
	I0805 13:02:41.316622  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.316633  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:41.316641  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:41.316708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:41.353282  451238 cri.go:89] found id: ""
	I0805 13:02:41.353313  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.353324  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:41.353333  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:41.353397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:41.393379  451238 cri.go:89] found id: ""
	I0805 13:02:41.393406  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.393417  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:41.393426  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:41.393502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:41.430980  451238 cri.go:89] found id: ""
	I0805 13:02:41.431012  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.431023  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:41.431031  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:41.431106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:41.467228  451238 cri.go:89] found id: ""
	I0805 13:02:41.467261  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.467273  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:41.467281  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:41.467348  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:41.502105  451238 cri.go:89] found id: ""
	I0805 13:02:41.502153  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.502166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:41.502175  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:41.502250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:41.539286  451238 cri.go:89] found id: ""
	I0805 13:02:41.539314  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.539325  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:41.539338  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:41.539353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:41.592135  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:41.592175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:41.608151  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:41.608184  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:41.680096  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:41.680131  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:41.680148  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:41.759589  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:41.759628  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:39.025448  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:41.526590  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:40.409447  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.909412  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:39.945459  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.444630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.300461  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:44.314310  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:44.314388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:44.348516  451238 cri.go:89] found id: ""
	I0805 13:02:44.348549  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.348562  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:44.348570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:44.348635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:44.388256  451238 cri.go:89] found id: ""
	I0805 13:02:44.388289  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.388299  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:44.388309  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:44.388383  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:44.426743  451238 cri.go:89] found id: ""
	I0805 13:02:44.426778  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.426786  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:44.426792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:44.426848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:44.463008  451238 cri.go:89] found id: ""
	I0805 13:02:44.463044  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.463054  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:44.463062  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:44.463129  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:44.497662  451238 cri.go:89] found id: ""
	I0805 13:02:44.497696  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.497707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:44.497715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:44.497789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:44.534253  451238 cri.go:89] found id: ""
	I0805 13:02:44.534281  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.534288  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:44.534294  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:44.534378  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:44.574350  451238 cri.go:89] found id: ""
	I0805 13:02:44.574380  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.574390  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:44.574398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:44.574468  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:44.609984  451238 cri.go:89] found id: ""
	I0805 13:02:44.610018  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.610031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:44.610044  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:44.610060  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:44.650363  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:44.650402  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:44.700997  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:44.701032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:44.716841  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:44.716874  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:44.785482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:44.785502  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:44.785517  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:44.023932  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.025733  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.909613  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.409724  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.445234  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.944157  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:48.946098  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.365382  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:47.378779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:47.378851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:47.413615  451238 cri.go:89] found id: ""
	I0805 13:02:47.413636  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.413645  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:47.413651  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:47.413699  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:47.448536  451238 cri.go:89] found id: ""
	I0805 13:02:47.448563  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.448572  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:47.448578  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:47.448629  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:47.490817  451238 cri.go:89] found id: ""
	I0805 13:02:47.490847  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.490856  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:47.490862  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:47.490931  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:47.533151  451238 cri.go:89] found id: ""
	I0805 13:02:47.533179  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.533187  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:47.533193  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:47.533250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:47.571991  451238 cri.go:89] found id: ""
	I0805 13:02:47.572022  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.572030  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:47.572036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:47.572096  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:47.606943  451238 cri.go:89] found id: ""
	I0805 13:02:47.606976  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.606987  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:47.606995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:47.607073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:47.644704  451238 cri.go:89] found id: ""
	I0805 13:02:47.644741  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.644753  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:47.644762  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:47.644828  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:47.687361  451238 cri.go:89] found id: ""
	I0805 13:02:47.687395  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.687408  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:47.687427  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:47.687453  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.766572  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:47.766614  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:47.812209  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:47.812242  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:47.862948  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:47.862987  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:47.878697  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:47.878729  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:47.951680  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.452861  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:50.466370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:50.466440  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:50.500001  451238 cri.go:89] found id: ""
	I0805 13:02:50.500031  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.500043  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:50.500051  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:50.500126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:50.541752  451238 cri.go:89] found id: ""
	I0805 13:02:50.541786  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.541794  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:50.541800  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:50.541864  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:50.578889  451238 cri.go:89] found id: ""
	I0805 13:02:50.578915  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.578923  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:50.578930  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:50.578984  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:50.614865  451238 cri.go:89] found id: ""
	I0805 13:02:50.614896  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.614906  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:50.614912  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:50.614980  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:50.656169  451238 cri.go:89] found id: ""
	I0805 13:02:50.656195  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.656202  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:50.656209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:50.656277  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:50.695050  451238 cri.go:89] found id: ""
	I0805 13:02:50.695082  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.695099  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:50.695108  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:50.695187  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:50.733205  451238 cri.go:89] found id: ""
	I0805 13:02:50.733233  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.733242  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:50.733249  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:50.733300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:50.770654  451238 cri.go:89] found id: ""
	I0805 13:02:50.770683  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.770693  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:50.770706  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:50.770721  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:50.826521  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:50.826567  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:50.842153  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:50.842181  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:50.916445  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.916474  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:50.916487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:50.999973  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:51.000020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:48.525240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.024459  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:49.907505  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.909037  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:50.946199  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.444128  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.539541  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:53.553804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:53.553893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:53.593075  451238 cri.go:89] found id: ""
	I0805 13:02:53.593105  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.593114  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:53.593121  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:53.593190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:53.629967  451238 cri.go:89] found id: ""
	I0805 13:02:53.630001  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.630012  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:53.630020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:53.630088  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:53.663535  451238 cri.go:89] found id: ""
	I0805 13:02:53.663564  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.663572  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:53.663577  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:53.663635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:53.697650  451238 cri.go:89] found id: ""
	I0805 13:02:53.697676  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.697684  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:53.697690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:53.697741  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:53.732845  451238 cri.go:89] found id: ""
	I0805 13:02:53.732873  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.732883  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:53.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:53.732950  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:53.774673  451238 cri.go:89] found id: ""
	I0805 13:02:53.774703  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.774712  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:53.774719  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:53.774783  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:53.815368  451238 cri.go:89] found id: ""
	I0805 13:02:53.815401  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.815413  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:53.815423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:53.815487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:53.849726  451238 cri.go:89] found id: ""
	I0805 13:02:53.849760  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.849771  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:53.849785  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:53.849801  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:53.925356  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:53.925398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.966721  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:53.966751  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:54.023096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:54.023140  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:54.037634  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:54.037666  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:54.115159  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:56.616326  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:56.629665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:56.629744  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:56.665665  451238 cri.go:89] found id: ""
	I0805 13:02:56.665701  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.665713  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:56.665722  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:56.665790  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:56.700446  451238 cri.go:89] found id: ""
	I0805 13:02:56.700473  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.700481  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:56.700488  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:56.700554  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:56.737152  451238 cri.go:89] found id: ""
	I0805 13:02:56.737190  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.737202  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:56.737210  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:56.737283  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:56.777909  451238 cri.go:89] found id: ""
	I0805 13:02:56.777942  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.777954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:56.777961  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:56.778027  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:56.813503  451238 cri.go:89] found id: ""
	I0805 13:02:56.813537  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.813547  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:56.813556  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:56.813625  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:56.848964  451238 cri.go:89] found id: ""
	I0805 13:02:56.848993  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.849002  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:56.849008  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:56.849071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:56.884310  451238 cri.go:89] found id: ""
	I0805 13:02:56.884339  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.884347  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:56.884356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:56.884417  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:56.925895  451238 cri.go:89] found id: ""
	I0805 13:02:56.925926  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.925936  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:56.925948  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:56.925962  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:53.025086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.025424  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.026117  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.909851  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.411536  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.945123  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.945278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.982847  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:56.982882  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:56.997703  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:56.997742  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:57.071130  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:57.071153  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:57.071174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:57.152985  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:57.153029  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.697501  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:59.711799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:59.711879  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:59.746992  451238 cri.go:89] found id: ""
	I0805 13:02:59.747024  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.747035  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:59.747043  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:59.747115  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:59.780563  451238 cri.go:89] found id: ""
	I0805 13:02:59.780592  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.780604  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:59.780611  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:59.780676  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:59.816973  451238 cri.go:89] found id: ""
	I0805 13:02:59.817007  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.817019  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:59.817027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:59.817098  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:59.851989  451238 cri.go:89] found id: ""
	I0805 13:02:59.852018  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.852028  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:59.852035  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:59.852086  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:59.887491  451238 cri.go:89] found id: ""
	I0805 13:02:59.887517  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.887525  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:59.887535  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:59.887587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:59.924965  451238 cri.go:89] found id: ""
	I0805 13:02:59.924997  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.925005  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:59.925012  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:59.925062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:59.965830  451238 cri.go:89] found id: ""
	I0805 13:02:59.965860  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.965868  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:59.965875  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:59.965932  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:00.003208  451238 cri.go:89] found id: ""
	I0805 13:03:00.003241  451238 logs.go:276] 0 containers: []
	W0805 13:03:00.003250  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:00.003260  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:00.003275  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:00.056865  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:00.056911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:00.070563  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:00.070593  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:00.137931  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:00.137957  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:00.137976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:00.221598  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:00.221649  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.525042  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.024461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:58.903499  450576 pod_ready.go:81] duration metric: took 4m0.001018928s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	E0805 13:02:58.903533  450576 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:02:58.903556  450576 pod_ready.go:38] duration metric: took 4m8.049032492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:02:58.903598  450576 kubeadm.go:597] duration metric: took 4m18.518107211s to restartPrimaryControlPlane
	W0805 13:02:58.903786  450576 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:02:58.903819  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:02:59.945464  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.443954  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.761328  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:02.775836  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:02.775904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:02.812714  451238 cri.go:89] found id: ""
	I0805 13:03:02.812752  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.812764  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:02.812773  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:02.812848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:02.850072  451238 cri.go:89] found id: ""
	I0805 13:03:02.850103  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.850130  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:02.850138  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:02.850197  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:02.886956  451238 cri.go:89] found id: ""
	I0805 13:03:02.887081  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.887103  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:02.887114  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:02.887188  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:02.924874  451238 cri.go:89] found id: ""
	I0805 13:03:02.924906  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.924918  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:02.924925  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:02.924996  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:02.965965  451238 cri.go:89] found id: ""
	I0805 13:03:02.965996  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.966007  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:02.966015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:02.966101  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:03.001081  451238 cri.go:89] found id: ""
	I0805 13:03:03.001118  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.001130  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:03.001140  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:03.001201  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:03.036194  451238 cri.go:89] found id: ""
	I0805 13:03:03.036223  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.036234  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:03.036243  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:03.036303  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:03.071905  451238 cri.go:89] found id: ""
	I0805 13:03:03.071940  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.071951  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:03.071964  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:03.071982  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:03.124400  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:03.124442  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:03.138492  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:03.138520  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:03.207300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:03.207326  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:03.207342  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:03.294941  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:03.294983  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:05.836187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:05.850504  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:05.850609  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:05.889692  451238 cri.go:89] found id: ""
	I0805 13:03:05.889718  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.889729  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:05.889737  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:05.889804  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:05.924597  451238 cri.go:89] found id: ""
	I0805 13:03:05.924630  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.924640  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:05.924647  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:05.924711  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:05.960373  451238 cri.go:89] found id: ""
	I0805 13:03:05.960404  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.960413  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:05.960419  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:05.960471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:05.996583  451238 cri.go:89] found id: ""
	I0805 13:03:05.996617  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.996628  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:05.996636  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:05.996708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:06.033539  451238 cri.go:89] found id: ""
	I0805 13:03:06.033567  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.033575  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:06.033586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:06.033655  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:06.069348  451238 cri.go:89] found id: ""
	I0805 13:03:06.069378  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.069391  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:06.069401  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:06.069466  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:06.103570  451238 cri.go:89] found id: ""
	I0805 13:03:06.103599  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.103607  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:06.103613  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:06.103665  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:06.140230  451238 cri.go:89] found id: ""
	I0805 13:03:06.140260  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.140271  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:06.140284  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:06.140300  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:06.191073  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:06.191123  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:06.204825  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:06.204857  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:06.281309  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:06.281339  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:06.281358  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:06.361709  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:06.361749  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:04.025007  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.524506  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:04.444267  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.444910  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.445441  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.903194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:08.921602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:08.921681  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:08.960916  451238 cri.go:89] found id: ""
	I0805 13:03:08.960945  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.960975  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:08.960986  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:08.961055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:08.996316  451238 cri.go:89] found id: ""
	I0805 13:03:08.996417  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.996436  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:08.996448  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:08.996522  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:09.038536  451238 cri.go:89] found id: ""
	I0805 13:03:09.038572  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.038584  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:09.038593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:09.038664  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:09.075368  451238 cri.go:89] found id: ""
	I0805 13:03:09.075396  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.075405  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:09.075412  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:09.075474  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:09.114232  451238 cri.go:89] found id: ""
	I0805 13:03:09.114262  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.114272  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:09.114280  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:09.114353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:09.161878  451238 cri.go:89] found id: ""
	I0805 13:03:09.161964  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.161978  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:09.161988  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:09.162062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:09.206694  451238 cri.go:89] found id: ""
	I0805 13:03:09.206727  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.206739  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:09.206748  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:09.206890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:09.257029  451238 cri.go:89] found id: ""
	I0805 13:03:09.257066  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.257079  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:09.257090  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:09.257107  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:09.278638  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:09.278679  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:09.353760  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:09.353781  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:09.353793  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:09.438371  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:09.438419  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:09.487253  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:09.487297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:08.018954  450884 pod_ready.go:81] duration metric: took 4m0.00055059s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:08.018987  450884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:03:08.019010  450884 pod_ready.go:38] duration metric: took 4m11.028507743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:08.019048  450884 kubeadm.go:597] duration metric: took 4m19.097834327s to restartPrimaryControlPlane
	W0805 13:03:08.019122  450884 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:08.019157  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:10.945002  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.945953  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.042215  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:12.055721  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:12.055812  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:12.096936  451238 cri.go:89] found id: ""
	I0805 13:03:12.096965  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.096977  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:12.096985  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:12.097051  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:12.136149  451238 cri.go:89] found id: ""
	I0805 13:03:12.136181  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.136192  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:12.136199  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:12.136276  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:12.180568  451238 cri.go:89] found id: ""
	I0805 13:03:12.180606  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.180618  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:12.180626  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:12.180695  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:12.221759  451238 cri.go:89] found id: ""
	I0805 13:03:12.221794  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.221806  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:12.221815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:12.221882  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:12.259460  451238 cri.go:89] found id: ""
	I0805 13:03:12.259490  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.259498  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:12.259508  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:12.259563  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:12.301245  451238 cri.go:89] found id: ""
	I0805 13:03:12.301277  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.301289  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:12.301297  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:12.301368  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:12.343640  451238 cri.go:89] found id: ""
	I0805 13:03:12.343678  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.343690  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:12.343698  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:12.343809  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:12.382729  451238 cri.go:89] found id: ""
	I0805 13:03:12.382762  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.382774  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:12.382787  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:12.382807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:12.400862  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:12.400897  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:12.478755  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:12.478788  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:12.478807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:12.566029  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:12.566080  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:12.611834  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:12.611929  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:15.171517  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:15.185569  451238 kubeadm.go:597] duration metric: took 4m3.737627997s to restartPrimaryControlPlane
	W0805 13:03:15.185662  451238 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:15.185697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:15.669994  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:15.684794  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:15.695088  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:15.705403  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:15.705427  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:15.705488  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:15.714777  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:15.714833  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:15.724437  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:15.733263  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:15.733317  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:15.743004  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.752219  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:15.752278  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.761788  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:15.771193  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:15.771245  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:15.780964  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:15.855628  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:03:15.855751  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:16.015686  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:16.015880  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:16.016041  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:16.207054  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:16.209133  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:16.209256  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:16.209376  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:16.209493  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:16.209597  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:16.209703  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:16.211637  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:16.211726  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:16.211833  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:16.211959  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:16.212690  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:16.212863  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:16.212963  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:16.283080  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:16.609523  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:16.765635  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:16.934487  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:16.955335  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:16.956267  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:16.956328  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:17.088081  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:15.445305  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.447306  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.090118  451238 out.go:204]   - Booting up control plane ...
	I0805 13:03:17.090264  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:17.100902  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:17.101263  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:17.102210  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:17.112522  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:03:19.943658  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:21.944253  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:23.945158  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:25.252381  450576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.348530672s)
	I0805 13:03:25.252504  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:25.269305  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:25.279322  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:25.289241  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:25.289266  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:25.289304  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:25.298671  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:25.298732  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:25.309962  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:25.320180  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:25.320247  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:25.330481  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.340565  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:25.340652  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.351244  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:25.361443  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:25.361536  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:25.371655  450576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:25.419277  450576 kubeadm.go:310] W0805 13:03:25.398597    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.420220  450576 kubeadm.go:310] W0805 13:03:25.399642    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.537148  450576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:25.945501  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:27.945972  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.413703  450576 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0805 13:03:33.413775  450576 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:33.413863  450576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:33.414008  450576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:33.414152  450576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:33.414235  450576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:33.415804  450576 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:33.415874  450576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:33.415949  450576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:33.416037  450576 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:33.416101  450576 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:33.416174  450576 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:33.416237  450576 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:33.416289  450576 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:33.416357  450576 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:33.416437  450576 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:33.416518  450576 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:33.416553  450576 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:33.416603  450576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:33.416646  450576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:33.416701  450576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:33.416745  450576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:33.416816  450576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:33.416878  450576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:33.416971  450576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:33.417059  450576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:33.418572  450576 out.go:204]   - Booting up control plane ...
	I0805 13:03:33.418671  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:33.418751  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:33.418833  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:33.418965  450576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:33.419092  450576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:33.419172  450576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:33.419342  450576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:33.419488  450576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0805 13:03:33.419577  450576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.308417ms
	I0805 13:03:33.419672  450576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:33.419780  450576 kubeadm.go:310] [api-check] The API server is healthy after 5.001429681s
	I0805 13:03:33.419908  450576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:33.420049  450576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:33.420117  450576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:33.420293  450576 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-669469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:33.420385  450576 kubeadm.go:310] [bootstrap-token] Using token: i9zl3x.c4hzh1c9ccxlydzt
	I0805 13:03:33.421925  450576 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:33.422042  450576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:33.422157  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:33.422352  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:33.422488  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:33.422649  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:33.422784  450576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:33.422914  450576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:33.422991  450576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:33.423060  450576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:33.423070  450576 kubeadm.go:310] 
	I0805 13:03:33.423160  450576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:33.423173  450576 kubeadm.go:310] 
	I0805 13:03:33.423274  450576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:33.423283  450576 kubeadm.go:310] 
	I0805 13:03:33.423316  450576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:33.423409  450576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:33.423495  450576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:33.423513  450576 kubeadm.go:310] 
	I0805 13:03:33.423616  450576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:33.423628  450576 kubeadm.go:310] 
	I0805 13:03:33.423692  450576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:33.423701  450576 kubeadm.go:310] 
	I0805 13:03:33.423793  450576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:33.423931  450576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:33.424030  450576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:33.424039  450576 kubeadm.go:310] 
	I0805 13:03:33.424106  450576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:33.424176  450576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:33.424185  450576 kubeadm.go:310] 
	I0805 13:03:33.424282  450576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424430  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:33.424473  450576 kubeadm.go:310] 	--control-plane 
	I0805 13:03:33.424482  450576 kubeadm.go:310] 
	I0805 13:03:33.424588  450576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:33.424602  450576 kubeadm.go:310] 
	I0805 13:03:33.424725  450576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424870  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
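The join commands printed above embed the bootstrap token minted by this init run (i9zl3x.c4hzh1c9ccxlydzt), which expires after its TTL. As an editorial aside, not something the test performs: a fresh join command can be generated on the control-plane node with kubeadm itself.

    # Illustrative only; run on the control-plane node if the logged token has expired.
    sudo kubeadm token create --print-join-command
    sudo kubeadm token list    # shows existing tokens and their TTLs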
	I0805 13:03:33.424892  450576 cni.go:84] Creating CNI manager for ""
	I0805 13:03:33.424911  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:33.426503  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:33.427981  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:33.439484  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
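The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The sketch below is a representative bridge-plus-portmap conflist of the kind the bridge CNI option writes; the file name and path come from the line above, but every field value (name, subnet, plugin settings) is an assumption, not the literal file.

    # Hypothetical sketch of a bridge CNI chain; subnet and plugin settings are assumed.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true,
          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF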
	I0805 13:03:33.458459  450576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:33.458547  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:33.458579  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-669469 minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=no-preload-669469 minikube.k8s.io/primary=true
	I0805 13:03:33.488847  450576 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:29.946423  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:32.444923  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.674306  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.174940  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.674936  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.174693  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.675004  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.174801  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.674878  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.174394  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.263948  450576 kubeadm.go:1113] duration metric: took 3.805464287s to wait for elevateKubeSystemPrivileges
	I0805 13:03:37.263985  450576 kubeadm.go:394] duration metric: took 4m56.93214495s to StartCluster
	I0805 13:03:37.264025  450576 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.264143  450576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:03:37.265965  450576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.266283  450576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:03:37.266400  450576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:03:37.266469  450576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-669469"
	I0805 13:03:37.266510  450576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-669469"
	W0805 13:03:37.266518  450576 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:03:37.266519  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:03:37.266551  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.266505  450576 addons.go:69] Setting default-storageclass=true in profile "no-preload-669469"
	I0805 13:03:37.266547  450576 addons.go:69] Setting metrics-server=true in profile "no-preload-669469"
	I0805 13:03:37.266612  450576 addons.go:234] Setting addon metrics-server=true in "no-preload-669469"
	I0805 13:03:37.266616  450576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-669469"
	W0805 13:03:37.266627  450576 addons.go:243] addon metrics-server should already be in state true
	I0805 13:03:37.266668  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267035  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267041  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267085  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267985  450576 out.go:177] * Verifying Kubernetes components...
	I0805 13:03:37.269486  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:03:37.283242  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0805 13:03:37.283291  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0805 13:03:37.283245  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0805 13:03:37.283710  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283785  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283717  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284316  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284319  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284335  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284360  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284734  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284735  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284746  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284963  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.285343  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285375  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.285387  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285441  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.288699  450576 addons.go:234] Setting addon default-storageclass=true in "no-preload-669469"
	W0805 13:03:37.288722  450576 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:03:37.288753  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.289023  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.289049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.303814  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0805 13:03:37.304491  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.305081  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.305104  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.305552  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0805 13:03:37.305566  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0805 13:03:37.305583  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.305928  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306007  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306148  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.306190  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.306485  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306503  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306595  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306611  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306971  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.306998  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.307157  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.307162  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.309002  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.309241  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.311054  450576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:03:37.311055  450576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:03:37.312682  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:03:37.312695  450576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:03:37.312710  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.312834  450576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.312856  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:03:37.312874  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.317044  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317635  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.317660  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317753  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317955  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318141  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.318360  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.318400  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.318427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.318539  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.318633  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318967  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.319111  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.319241  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.325066  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0805 13:03:37.325633  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.326052  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.326071  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.326326  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.326473  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.328502  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.328814  450576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.328826  450576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:03:37.328839  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.331482  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.331853  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.331874  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.332013  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.332169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.332270  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.332358  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.483477  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:03:37.501924  450576 node_ready.go:35] waiting up to 6m0s for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511394  450576 node_ready.go:49] node "no-preload-669469" has status "Ready":"True"
	I0805 13:03:37.511427  450576 node_ready.go:38] duration metric: took 9.462968ms for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511443  450576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:37.526505  450576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:37.575598  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.583338  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:03:37.583362  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:03:37.594019  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.629885  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:03:37.629913  450576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:03:37.684790  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.684825  450576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:03:37.753629  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.857352  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857386  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.857777  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.857780  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.857812  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.857829  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857838  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.858101  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.858117  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.858153  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.871616  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.871639  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.871970  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.872022  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.872031  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290429  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290449  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290784  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.290856  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290871  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290880  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290829  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.291265  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.291289  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.291271  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.880274  450576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.126602375s)
	I0805 13:03:38.880331  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880344  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880868  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.880896  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.880906  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880916  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880871  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881196  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.881204  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881211  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.881230  450576 addons.go:475] Verifying addon metrics-server=true in "no-preload-669469"
	I0805 13:03:38.882896  450576 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
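With the addon manifests applied, a manual way to confirm that metrics-server actually becomes serviceable (its pod is still Pending at this point in the run) would be ordinary kubectl queries. These are editorial, not part of the test flow, and the apiservice name is the one metrics-server normally registers.

    # Illustrative checks; the test itself polls pod status through its own helpers.
    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes    # only works once the metrics API is actually serving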
	I0805 13:03:34.945631  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:37.446855  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:39.741362  450884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.722174979s)
	I0805 13:03:39.741438  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:39.760465  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:39.770587  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:39.780157  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:39.780177  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:39.780215  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 13:03:39.790172  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:39.790243  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:39.803838  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 13:03:39.816314  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:39.816367  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:39.826636  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.836513  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:39.836570  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.846356  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 13:03:39.855694  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:39.855770  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:39.865721  450884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:40.081251  450884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:38.884521  450576 addons.go:510] duration metric: took 1.618121451s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:03:39.536758  450576 pod_ready.go:102] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:41.035239  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.035266  450576 pod_ready.go:81] duration metric: took 3.508734543s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.035280  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042787  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.042811  450576 pod_ready.go:81] duration metric: took 7.522909ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042824  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048338  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:42.048363  450576 pod_ready.go:81] duration metric: took 1.005531569s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048373  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:39.945815  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:42.445704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:44.056394  450576 pod_ready.go:102] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:45.555280  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:45.555310  450576 pod_ready.go:81] duration metric: took 3.506927542s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:45.555321  450576 pod_ready.go:38] duration metric: took 8.043865797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
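The readiness gate summarized above is implemented inside minikube's pod_ready helpers. A rough hand-rolled equivalent, using the same label selectors listed in the log line, could be expressed with kubectl wait; this is a sketch, not the code the test runs.

    # Hand-rolled approximation of the system-pod readiness gate (labels copied from the log).
    for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=6m0s
    done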
	I0805 13:03:45.555338  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:45.555397  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:45.572225  450576 api_server.go:72] duration metric: took 8.30589728s to wait for apiserver process to appear ...
	I0805 13:03:45.572249  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:45.572272  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 13:03:45.578042  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 13:03:45.579014  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 13:03:45.579034  450576 api_server.go:131] duration metric: took 6.778214ms to wait for apiserver health ...
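The healthz probe logged above can be reproduced by hand with curl; the -k flag below skips certificate verification, which is a simplification (the real check uses the cluster's CA material), so treat it as illustrative.

    # Roughly what the probe amounts to; expected response body is "ok".
    curl -sk https://192.168.72.223:8443/healthz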
	I0805 13:03:45.579042  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:45.585537  450576 system_pods.go:59] 9 kube-system pods found
	I0805 13:03:45.585660  450576 system_pods.go:61] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.585673  450576 system_pods.go:61] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.585679  450576 system_pods.go:61] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.585687  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.585694  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.585700  450576 system_pods.go:61] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.585705  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.585716  450576 system_pods.go:61] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.585726  450576 system_pods.go:61] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.585736  450576 system_pods.go:74] duration metric: took 6.688464ms to wait for pod list to return data ...
	I0805 13:03:45.585749  450576 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:03:45.589498  450576 default_sa.go:45] found service account: "default"
	I0805 13:03:45.589526  450576 default_sa.go:55] duration metric: took 3.765664ms for default service account to be created ...
	I0805 13:03:45.589535  450576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:03:45.597499  450576 system_pods.go:86] 9 kube-system pods found
	I0805 13:03:45.597527  450576 system_pods.go:89] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.597533  450576 system_pods.go:89] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.597537  450576 system_pods.go:89] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.597541  450576 system_pods.go:89] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.597547  450576 system_pods.go:89] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.597550  450576 system_pods.go:89] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.597554  450576 system_pods.go:89] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.597563  450576 system_pods.go:89] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.597568  450576 system_pods.go:89] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.597577  450576 system_pods.go:126] duration metric: took 8.035546ms to wait for k8s-apps to be running ...
	I0805 13:03:45.597586  450576 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:03:45.597631  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:45.619317  450576 system_svc.go:56] duration metric: took 21.706117ms WaitForService to wait for kubelet
	I0805 13:03:45.619365  450576 kubeadm.go:582] duration metric: took 8.353035332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:03:45.619398  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:03:45.622763  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:03:45.622790  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 13:03:45.622801  450576 node_conditions.go:105] duration metric: took 3.396756ms to run NodePressure ...
	I0805 13:03:45.622814  450576 start.go:241] waiting for startup goroutines ...
	I0805 13:03:45.622821  450576 start.go:246] waiting for cluster config update ...
	I0805 13:03:45.622831  450576 start.go:255] writing updated cluster config ...
	I0805 13:03:45.623102  450576 ssh_runner.go:195] Run: rm -f paused
	I0805 13:03:45.682547  450576 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0805 13:03:45.684415  450576 out.go:177] * Done! kubectl is now configured to use "no-preload-669469" cluster and "default" namespace by default
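Once the kubeconfig is written and minikube reports "Done!", the usual manual smoke test, offered here as an aside rather than as part of the run, is to confirm the active context and node state.

    # Not part of the test run; a quick sanity check after the cluster is configured.
    kubectl config current-context    # should print no-preload-669469
    kubectl get nodes -o wide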
	I0805 13:03:48.707730  450884 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 13:03:48.707817  450884 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:48.707920  450884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:48.708065  450884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:48.708218  450884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:48.708311  450884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:48.709807  450884 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:48.709878  450884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:48.709931  450884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:48.710008  450884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:48.710084  450884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:48.710148  450884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:48.710196  450884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:48.710251  450884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:48.710316  450884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:48.710415  450884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:48.710520  450884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:48.710582  450884 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:48.710656  450884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:48.710700  450884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:48.710746  450884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:48.710790  450884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:48.710843  450884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:48.710895  450884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:48.710971  450884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:48.711055  450884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:48.713503  450884 out.go:204]   - Booting up control plane ...
	I0805 13:03:48.713601  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:48.713687  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:48.713763  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:48.713911  450884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:48.714039  450884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:48.714105  450884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:48.714222  450884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:48.714284  450884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 13:03:48.714345  450884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.128103ms
	I0805 13:03:48.714423  450884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:48.714491  450884 kubeadm.go:310] [api-check] The API server is healthy after 5.502076793s
	I0805 13:03:48.714600  450884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:48.714730  450884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:48.714794  450884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:48.714987  450884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-371585 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:48.715075  450884 kubeadm.go:310] [bootstrap-token] Using token: cpuyhq.sjq5yhx27tk7meks
	I0805 13:03:48.716575  450884 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:48.716686  450884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:48.716775  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:48.716952  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:48.717075  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:48.717196  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:48.717270  450884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:48.717391  450884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:48.717450  450884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:48.717512  450884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:48.717521  450884 kubeadm.go:310] 
	I0805 13:03:48.717613  450884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:48.717623  450884 kubeadm.go:310] 
	I0805 13:03:48.717724  450884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:48.717734  450884 kubeadm.go:310] 
	I0805 13:03:48.717768  450884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:48.717848  450884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:48.717892  450884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:48.717898  450884 kubeadm.go:310] 
	I0805 13:03:48.717968  450884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:48.717978  450884 kubeadm.go:310] 
	I0805 13:03:48.718047  450884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:48.718057  450884 kubeadm.go:310] 
	I0805 13:03:48.718133  450884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:48.718220  450884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:48.718297  450884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:48.718304  450884 kubeadm.go:310] 
	I0805 13:03:48.718422  450884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:48.718506  450884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:48.718513  450884 kubeadm.go:310] 
	I0805 13:03:48.718585  450884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718669  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:48.718688  450884 kubeadm.go:310] 	--control-plane 
	I0805 13:03:48.718694  450884 kubeadm.go:310] 
	I0805 13:03:48.718761  450884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:48.718769  450884 kubeadm.go:310] 
	I0805 13:03:48.718848  450884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718948  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:48.718957  450884 cni.go:84] Creating CNI manager for ""
	I0805 13:03:48.718965  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:48.720262  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:44.946225  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:47.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:48.721390  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:48.732324  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:48.750318  450884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:48.750397  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:48.750398  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-371585 minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=default-k8s-diff-port-371585 minikube.k8s.io/primary=true
	I0805 13:03:48.781590  450884 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:48.966544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.467473  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.967093  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.466813  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.967183  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.467350  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.967432  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.444667  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:49.444719  450393 pod_ready.go:81] duration metric: took 4m0.006667631s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:49.444731  450393 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0805 13:03:49.444738  450393 pod_ready.go:38] duration metric: took 4m2.407503205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:49.444757  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:49.444787  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:49.444849  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:49.502039  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:49.502067  450393 cri.go:89] found id: ""
	I0805 13:03:49.502079  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:49.502139  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.510426  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:49.510494  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:49.553861  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:49.553889  450393 cri.go:89] found id: ""
	I0805 13:03:49.553899  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:49.553960  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.558802  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:49.558868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:49.594787  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:49.594810  450393 cri.go:89] found id: ""
	I0805 13:03:49.594828  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:49.594891  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.599735  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:49.599822  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:49.637856  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:49.637878  450393 cri.go:89] found id: ""
	I0805 13:03:49.637886  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:49.637939  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.642228  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:49.642295  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:49.683822  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:49.683844  450393 cri.go:89] found id: ""
	I0805 13:03:49.683853  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:49.683913  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.688077  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:49.688155  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:49.724887  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:49.724913  450393 cri.go:89] found id: ""
	I0805 13:03:49.724923  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:49.724987  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.728965  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:49.729052  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:49.765826  450393 cri.go:89] found id: ""
	I0805 13:03:49.765859  450393 logs.go:276] 0 containers: []
	W0805 13:03:49.765871  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:49.765878  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:49.765944  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:49.803790  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:49.803811  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.803815  450393 cri.go:89] found id: ""
	I0805 13:03:49.803823  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:49.803887  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.808064  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.812308  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:49.812332  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.851842  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:49.851867  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:50.418758  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:50.418808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:50.564965  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:50.564999  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:50.608518  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:50.608557  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:50.658446  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:50.658482  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:50.699924  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:50.699962  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:50.741228  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:50.741264  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:50.776100  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:50.776133  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:50.827847  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:50.827880  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:50.867699  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:50.867731  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:50.920049  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:50.920085  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:50.934198  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:50.934224  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:53.477808  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:53.494062  450393 api_server.go:72] duration metric: took 4m14.183013645s to wait for apiserver process to appear ...
	I0805 13:03:53.494093  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:53.494143  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:53.494211  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:53.534293  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:53.534322  450393 cri.go:89] found id: ""
	I0805 13:03:53.534333  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:53.534400  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.539014  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:53.539088  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:53.576587  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:53.576608  450393 cri.go:89] found id: ""
	I0805 13:03:53.576616  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:53.576667  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.582068  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:53.582147  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:53.623240  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:53.623264  450393 cri.go:89] found id: ""
	I0805 13:03:53.623274  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:53.623352  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.627638  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:53.627699  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:53.668167  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:53.668198  450393 cri.go:89] found id: ""
	I0805 13:03:53.668209  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:53.668281  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.672390  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:53.672469  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:53.714046  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:53.714069  450393 cri.go:89] found id: ""
	I0805 13:03:53.714078  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:53.714130  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.718325  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:53.718392  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:53.756343  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:53.756372  450393 cri.go:89] found id: ""
	I0805 13:03:53.756382  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:53.756444  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.760627  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:53.760696  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:53.806370  450393 cri.go:89] found id: ""
	I0805 13:03:53.806406  450393 logs.go:276] 0 containers: []
	W0805 13:03:53.806424  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:53.806432  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:53.806505  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:53.843082  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:53.843116  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:53.843121  450393 cri.go:89] found id: ""
	I0805 13:03:53.843129  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:53.843188  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.847214  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.851093  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:53.851112  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:52.467589  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:52.967390  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.466580  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.967544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.467454  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.967281  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.467111  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.467255  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.296506  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:54.296556  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:54.343983  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:54.344026  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:54.389236  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:54.389271  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:54.427964  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:54.427996  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:54.465953  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:54.465988  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:54.521755  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:54.521835  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:54.565481  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:54.565513  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:54.606592  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:54.606634  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:54.650820  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:54.650858  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:54.704512  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:54.704559  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:54.722149  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:54.722184  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:54.844289  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:54.844324  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.386998  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 13:03:57.391714  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 13:03:57.392752  450393 api_server.go:141] control plane version: v1.30.3
	I0805 13:03:57.392776  450393 api_server.go:131] duration metric: took 3.898675075s to wait for apiserver health ...
	I0805 13:03:57.392783  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:57.392812  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:57.392868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:57.430171  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:57.430201  450393 cri.go:89] found id: ""
	I0805 13:03:57.430210  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:57.430270  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.434861  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:57.434920  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:57.490595  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:57.490622  450393 cri.go:89] found id: ""
	I0805 13:03:57.490632  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:57.490702  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.496054  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:57.496141  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:57.540248  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:57.540278  450393 cri.go:89] found id: ""
	I0805 13:03:57.540289  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:57.540353  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.547750  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:57.547820  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:57.595821  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.595852  450393 cri.go:89] found id: ""
	I0805 13:03:57.595864  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:57.595932  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.600153  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:57.600225  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:57.640382  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:57.640409  450393 cri.go:89] found id: ""
	I0805 13:03:57.640418  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:57.640486  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.645476  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:57.645569  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:57.700199  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:57.700224  450393 cri.go:89] found id: ""
	I0805 13:03:57.700233  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:57.700294  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.704818  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:57.704874  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:57.745647  450393 cri.go:89] found id: ""
	I0805 13:03:57.745677  450393 logs.go:276] 0 containers: []
	W0805 13:03:57.745687  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:57.745696  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:57.745760  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:57.787327  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:57.787367  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:57.787374  450393 cri.go:89] found id: ""
	I0805 13:03:57.787384  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:57.787448  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.792340  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.796906  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:57.796933  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:57.850401  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:57.850447  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:57.961760  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:57.961808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:58.009682  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:58.009720  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:58.061874  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:58.061915  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:58.105715  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:58.105745  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:58.164739  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:58.164780  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:58.203530  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:58.203579  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:58.245478  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:58.245511  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:58.647807  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:58.647857  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:58.694175  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:58.694211  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:58.709744  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:58.709773  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:58.750668  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:58.750698  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:04:01.297212  450393 system_pods.go:59] 8 kube-system pods found
	I0805 13:04:01.297248  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.297255  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.297261  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.297265  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.297269  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.297273  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.297281  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.297289  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.297300  450393 system_pods.go:74] duration metric: took 3.904508974s to wait for pod list to return data ...
	I0805 13:04:01.297312  450393 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:01.299765  450393 default_sa.go:45] found service account: "default"
	I0805 13:04:01.299792  450393 default_sa.go:55] duration metric: took 2.470684ms for default service account to be created ...
	I0805 13:04:01.299802  450393 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:01.304612  450393 system_pods.go:86] 8 kube-system pods found
	I0805 13:04:01.304644  450393 system_pods.go:89] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.304651  450393 system_pods.go:89] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.304656  450393 system_pods.go:89] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.304661  450393 system_pods.go:89] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.304665  450393 system_pods.go:89] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.304670  450393 system_pods.go:89] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.304677  450393 system_pods.go:89] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.304685  450393 system_pods.go:89] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.304694  450393 system_pods.go:126] duration metric: took 4.885808ms to wait for k8s-apps to be running ...
	I0805 13:04:01.304702  450393 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:01.304751  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:01.323278  450393 system_svc.go:56] duration metric: took 18.55935ms WaitForService to wait for kubelet
	I0805 13:04:01.323316  450393 kubeadm.go:582] duration metric: took 4m22.01227204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:01.323349  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:01.326802  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:01.326829  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:01.326843  450393 node_conditions.go:105] duration metric: took 3.486931ms to run NodePressure ...
	I0805 13:04:01.326859  450393 start.go:241] waiting for startup goroutines ...
	I0805 13:04:01.326869  450393 start.go:246] waiting for cluster config update ...
	I0805 13:04:01.326883  450393 start.go:255] writing updated cluster config ...
	I0805 13:04:01.327230  450393 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:01.380315  450393 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:01.381891  450393 out.go:177] * Done! kubectl is now configured to use "embed-certs-321139" cluster and "default" namespace by default
	I0805 13:03:57.113870  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:03:57.114408  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:03:57.114630  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:03:57.467412  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:57.967538  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.467217  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.967035  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.466816  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.966909  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.467553  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.967667  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.467382  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.967495  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:02.085428  450884 kubeadm.go:1113] duration metric: took 13.335097096s to wait for elevateKubeSystemPrivileges
	I0805 13:04:02.085464  450884 kubeadm.go:394] duration metric: took 5m13.227479413s to StartCluster
	I0805 13:04:02.085482  450884 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.085571  450884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:04:02.087178  450884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.087425  450884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:04:02.087550  450884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:04:02.087653  450884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087659  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:04:02.087681  450884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087697  450884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087718  450884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087729  450884 addons.go:243] addon metrics-server should already be in state true
	I0805 13:04:02.087783  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.087727  450884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-371585"
	I0805 13:04:02.087692  450884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087953  450884 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:04:02.087986  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088294  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088377  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088406  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088415  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088935  450884 out.go:177] * Verifying Kubernetes components...
	I0805 13:04:02.090386  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:04:02.105328  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0805 13:04:02.105335  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0805 13:04:02.105853  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.105848  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106395  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106398  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106420  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106423  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106506  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0805 13:04:02.106879  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.106957  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106982  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.107193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.107508  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.107522  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.107534  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.107561  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.107903  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.108458  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.108490  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.111681  450884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.111709  450884 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:04:02.111775  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.113601  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.113648  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.127860  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0805 13:04:02.128512  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.128619  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0805 13:04:02.129023  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.129174  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129198  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129495  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129516  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129566  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129850  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129879  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.130443  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.131691  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.132370  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.133468  450884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:04:02.134210  450884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:04:02.134899  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0805 13:04:02.135049  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:04:02.135067  450884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:04:02.135099  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135183  450884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.135201  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:04:02.135216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135404  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.136704  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.136723  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.138362  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.138801  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.138918  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139264  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.139290  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.139335  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139377  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139404  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139482  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139637  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139737  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139807  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139867  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.139909  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.159720  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0805 13:04:02.160199  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.160744  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.160770  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.161048  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.161246  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.162535  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.162788  450884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.162805  450884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:04:02.162825  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.165787  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166204  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.166236  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166411  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.166594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.166744  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.166876  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.349175  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:04:02.453663  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.462474  450884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472177  450884 node_ready.go:49] node "default-k8s-diff-port-371585" has status "Ready":"True"
	I0805 13:04:02.472201  450884 node_ready.go:38] duration metric: took 9.692872ms for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472211  450884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:02.474341  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:04:02.474363  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:04:02.485604  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:02.514889  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.543388  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:04:02.543428  450884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:04:02.618040  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.618094  450884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:04:02.716705  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.784102  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784545  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784566  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784577  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784586  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784588  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.784851  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784868  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784868  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.797584  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.797617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.797938  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.797956  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431060  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431452  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:03.431511  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431530  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431539  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431839  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431893  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.746668  450884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029912928s)
	I0805 13:04:03.746734  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.746750  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.747152  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.747180  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.747191  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.747200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.748527  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.748558  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.748571  450884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-371585"
	I0805 13:04:03.750522  450884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:04:03.751714  450884 addons.go:510] duration metric: took 1.664163176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:04:04.491832  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.491861  450884 pod_ready.go:81] duration metric: took 2.00623062s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.491870  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496173  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.496194  450884 pod_ready.go:81] duration metric: took 4.317446ms for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496202  450884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500270  450884 pod_ready.go:92] pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.500297  450884 pod_ready.go:81] duration metric: took 4.088399ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500309  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504892  450884 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.504917  450884 pod_ready.go:81] duration metric: took 4.598589ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504926  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509448  450884 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.509468  450884 pod_ready.go:81] duration metric: took 4.535174ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509478  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890517  450884 pod_ready.go:92] pod "kube-proxy-4v6sn" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.890544  450884 pod_ready.go:81] duration metric: took 381.059204ms for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890552  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289670  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:05.289701  450884 pod_ready.go:81] duration metric: took 399.141309ms for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289712  450884 pod_ready.go:38] duration metric: took 2.817491444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:05.289732  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:04:05.289805  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:04:05.305815  450884 api_server.go:72] duration metric: took 3.218344531s to wait for apiserver process to appear ...
	I0805 13:04:05.305848  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:04:05.305870  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 13:04:05.311144  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 13:04:05.312427  450884 api_server.go:141] control plane version: v1.30.3
	I0805 13:04:05.312450  450884 api_server.go:131] duration metric: took 6.595933ms to wait for apiserver health ...
	I0805 13:04:05.312460  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:04:05.493376  450884 system_pods.go:59] 9 kube-system pods found
	I0805 13:04:05.493417  450884 system_pods.go:61] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.493425  450884 system_pods.go:61] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.493432  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.493438  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.493444  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.493450  450884 system_pods.go:61] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.493456  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.493465  450884 system_pods.go:61] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.493475  450884 system_pods.go:61] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.493488  450884 system_pods.go:74] duration metric: took 181.019102ms to wait for pod list to return data ...
	I0805 13:04:05.493504  450884 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:05.688283  450884 default_sa.go:45] found service account: "default"
	I0805 13:04:05.688313  450884 default_sa.go:55] duration metric: took 194.799711ms for default service account to be created ...
	I0805 13:04:05.688323  450884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:05.892656  450884 system_pods.go:86] 9 kube-system pods found
	I0805 13:04:05.892688  450884 system_pods.go:89] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.892696  450884 system_pods.go:89] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.892702  450884 system_pods.go:89] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.892709  450884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.892715  450884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.892721  450884 system_pods.go:89] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.892727  450884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.892737  450884 system_pods.go:89] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.892743  450884 system_pods.go:89] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.892755  450884 system_pods.go:126] duration metric: took 204.423562ms to wait for k8s-apps to be running ...
	I0805 13:04:05.892765  450884 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:05.892819  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:05.907542  450884 system_svc.go:56] duration metric: took 14.764349ms WaitForService to wait for kubelet
	I0805 13:04:05.907576  450884 kubeadm.go:582] duration metric: took 3.820116927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:05.907599  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:06.089000  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:06.089025  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:06.089035  450884 node_conditions.go:105] duration metric: took 181.431221ms to run NodePressure ...
	I0805 13:04:06.089047  450884 start.go:241] waiting for startup goroutines ...
	I0805 13:04:06.089054  450884 start.go:246] waiting for cluster config update ...
	I0805 13:04:06.089065  450884 start.go:255] writing updated cluster config ...
	I0805 13:04:06.089373  450884 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:06.140202  450884 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:06.142149  450884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-371585" cluster and "default" namespace by default
	I0805 13:04:02.115811  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:02.116057  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:12.115990  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:12.116208  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:32.116734  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:32.117001  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119196  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:05:12.119475  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119502  451238 kubeadm.go:310] 
	I0805 13:05:12.119564  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:05:12.119622  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:05:12.119634  451238 kubeadm.go:310] 
	I0805 13:05:12.119680  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:05:12.119724  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:05:12.119880  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:05:12.119898  451238 kubeadm.go:310] 
	I0805 13:05:12.120029  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:05:12.120114  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:05:12.120169  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:05:12.120179  451238 kubeadm.go:310] 
	I0805 13:05:12.120321  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:05:12.120445  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:05:12.120455  451238 kubeadm.go:310] 
	I0805 13:05:12.120612  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:05:12.120751  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:05:12.120888  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:05:12.121010  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:05:12.121023  451238 kubeadm.go:310] 
	I0805 13:05:12.121325  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:05:12.121458  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:05:12.121545  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 13:05:12.121714  451238 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0805 13:05:12.121782  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:05:12.587687  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:05:12.603422  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:05:12.614302  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:05:12.614330  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:05:12.614391  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:05:12.625131  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:05:12.625199  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:05:12.635606  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:05:12.644896  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:05:12.644953  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:05:12.655178  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.664668  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:05:12.664753  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.675174  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:05:12.684765  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:05:12.684834  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:05:12.694762  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:05:12.930906  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:07:09.256859  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:07:09.257016  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 13:07:09.258511  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:07:09.258579  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:07:09.258710  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:07:09.258881  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:07:09.259022  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:07:09.259125  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:07:09.260912  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:07:09.261023  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:07:09.261123  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:07:09.261232  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:07:09.261319  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:07:09.261411  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:07:09.261507  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:07:09.261601  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:07:09.261690  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:07:09.261801  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:07:09.261946  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:07:09.262015  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:07:09.262119  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:07:09.262198  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:07:09.262273  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:07:09.262369  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:07:09.262464  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:07:09.262615  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:07:09.262731  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:07:09.262770  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:07:09.262831  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:07:09.264428  451238 out.go:204]   - Booting up control plane ...
	I0805 13:07:09.264537  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:07:09.264663  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:07:09.264774  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:07:09.264896  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:07:09.265144  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:07:09.265224  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:07:09.265318  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265554  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265630  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265783  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265886  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266143  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266221  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266387  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266472  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266656  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266673  451238 kubeadm.go:310] 
	I0805 13:07:09.266707  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:07:09.266738  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:07:09.266743  451238 kubeadm.go:310] 
	I0805 13:07:09.266788  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:07:09.266819  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:07:09.266924  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:07:09.266932  451238 kubeadm.go:310] 
	I0805 13:07:09.267050  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:07:09.267137  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:07:09.267192  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:07:09.267201  451238 kubeadm.go:310] 
	I0805 13:07:09.267316  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:07:09.267435  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:07:09.267445  451238 kubeadm.go:310] 
	I0805 13:07:09.267570  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:07:09.267683  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:07:09.267802  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:07:09.267898  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:07:09.267986  451238 kubeadm.go:310] 
	I0805 13:07:09.268003  451238 kubeadm.go:394] duration metric: took 7m57.870990174s to StartCluster
	I0805 13:07:09.268066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:07:09.268158  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:07:09.311436  451238 cri.go:89] found id: ""
	I0805 13:07:09.311471  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.311497  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:07:09.311509  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:07:09.311573  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:07:09.347748  451238 cri.go:89] found id: ""
	I0805 13:07:09.347776  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.347784  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:07:09.347797  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:07:09.347860  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:07:09.385418  451238 cri.go:89] found id: ""
	I0805 13:07:09.385445  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.385453  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:07:09.385460  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:07:09.385517  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:07:09.427209  451238 cri.go:89] found id: ""
	I0805 13:07:09.427255  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.427268  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:07:09.427276  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:07:09.427360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:07:09.461763  451238 cri.go:89] found id: ""
	I0805 13:07:09.461787  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.461795  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:07:09.461801  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:07:09.461854  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:07:09.498655  451238 cri.go:89] found id: ""
	I0805 13:07:09.498692  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.498705  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:07:09.498713  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:07:09.498782  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:07:09.534100  451238 cri.go:89] found id: ""
	I0805 13:07:09.534134  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.534143  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:07:09.534149  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:07:09.534207  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:07:09.570089  451238 cri.go:89] found id: ""
	I0805 13:07:09.570125  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.570137  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:07:09.570153  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:07:09.570176  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:07:09.625158  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:07:09.625199  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:07:09.640087  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:07:09.640119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:07:09.719851  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:07:09.719879  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:07:09.719895  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:07:09.832717  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:07:09.832758  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0805 13:07:09.878585  451238 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 13:07:09.878653  451238 out.go:239] * 
	W0805 13:07:09.878739  451238 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.878767  451238 out.go:239] * 
	W0805 13:07:09.879755  451238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 13:07:09.883027  451238 out.go:177] 
	W0805 13:07:09.884197  451238 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.884243  451238 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 13:07:09.884265  451238 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 13:07:09.885783  451238 out.go:177] 
	
	
	==> CRI-O <==
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.194325055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3679af0-f9fd-4d80-9303-10cf03e4268f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.194540810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3679af0-f9fd-4d80-9303-10cf03e4268f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.220379112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a1a732c-f464-46d3-a172-61cdce83872b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.220527302Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a1a732c-f464-46d3-a172-61cdce83872b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.221682180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38f63a06-7e05-45ad-ae9d-39910d9718f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.222215875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863588222191065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38f63a06-7e05-45ad-ae9d-39910d9718f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.222855776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bb11c68-f37e-4bdc-a982-40b979a5ac2b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.222978757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bb11c68-f37e-4bdc-a982-40b979a5ac2b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.223218072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bb11c68-f37e-4bdc-a982-40b979a5ac2b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.232821170Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=c00bdf23-d816-4df5-823a-8f8bc5cbc42c name=/runtime.v1.RuntimeService/Status
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.233070862Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c00bdf23-d816-4df5-823a-8f8bc5cbc42c name=/runtime.v1.RuntimeService/Status
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.264040702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f95fd187-5e0a-4256-b3b2-e41b3123220b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.264111642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f95fd187-5e0a-4256-b3b2-e41b3123220b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.265292550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c8c3480-ba60-4f21-ab43-3dcacb8745c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.265768609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863588265744927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c8c3480-ba60-4f21-ab43-3dcacb8745c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.266507102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f20be37-ddcd-465f-8583-b21e202a37e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.266557471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f20be37-ddcd-465f-8583-b21e202a37e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.266773102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f20be37-ddcd-465f-8583-b21e202a37e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.301781642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b92225d4-4701-4283-80a5-7064b2347c5a name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.301864257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b92225d4-4701-4283-80a5-7064b2347c5a name=/runtime.v1.RuntimeService/Version
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.305585739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b45de372-1686-443f-947d-95720b6ac4f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.306036463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863588306015343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b45de372-1686-443f-947d-95720b6ac4f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.306765426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4ba6555-fddc-4a75-8744-439c7077eefd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.306919133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4ba6555-fddc-4a75-8744-439c7077eefd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:13:08 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:13:08.307170160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4ba6555-fddc-4a75-8744-439c7077eefd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1116fb42f7d41       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   02d39de3ad0a6       storage-provisioner
	897074922bcfd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5ca89a57de013       coredns-7db6d8ff4d-5vxpl
	ae5a9ec4aaae6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6e0294ae4fb3e       coredns-7db6d8ff4d-qtt9j
	d6a75d2e01ad7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   f766239566395       kube-proxy-4v6sn
	3cbbbd91c5830       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   437c0d82e3552       etcd-default-k8s-diff-port-371585
	dc1d9ae10f71a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   31b7ccd50e4d1       kube-apiserver-default-k8s-diff-port-371585
	82e042c530805       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   959ebdea65b37       kube-scheduler-default-k8s-diff-port-371585
	aab94bc76b4e6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   d5f7a49f52f8e       kube-controller-manager-default-k8s-diff-port-371585
	
	
	==> coredns [897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-371585
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-371585
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=default-k8s-diff-port-371585
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 13:03:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-371585
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 13:12:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 13:09:15 +0000   Mon, 05 Aug 2024 13:03:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 13:09:15 +0000   Mon, 05 Aug 2024 13:03:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 13:09:15 +0000   Mon, 05 Aug 2024 13:03:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 13:09:15 +0000   Mon, 05 Aug 2024 13:03:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.228
	  Hostname:    default-k8s-diff-port-371585
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d91729f35f4a63ad357597796476dc
	  System UUID:                74d91729-f35f-4a63-ad35-7597796476dc
	  Boot ID:                    dfc844bf-7a50-44db-8c15-aa02cd2e61bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5vxpl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-qtt9j                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-371585                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-371585             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-371585    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-4v6sn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-371585             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-xf92r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node default-k8s-diff-port-371585 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node default-k8s-diff-port-371585 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node default-k8s-diff-port-371585 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node default-k8s-diff-port-371585 event: Registered Node default-k8s-diff-port-371585 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050767] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.813351] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.608000] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.397969] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.202541] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.123638] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.228260] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.149949] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.382615] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.864588] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.057541] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.046870] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.622921] kauditd_printk_skb: 97 callbacks suppressed
	[Aug 5 12:59] kauditd_printk_skb: 79 callbacks suppressed
	[Aug 5 13:03] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.474131] systemd-fstab-generator[3564]: Ignoring "noauto" option for root device
	[  +4.536707] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.012069] systemd-fstab-generator[3887]: Ignoring "noauto" option for root device
	[Aug 5 13:04] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.431057] systemd-fstab-generator[4201]: Ignoring "noauto" option for root device
	[Aug 5 13:05] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca] <==
	{"level":"info","ts":"2024-08-05T13:03:42.855314Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T13:03:42.855624Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6b82b4cd33cdf876","initial-advertise-peer-urls":["https://192.168.50.228:2380"],"listen-peer-urls":["https://192.168.50.228:2380"],"advertise-client-urls":["https://192.168.50.228:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.228:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T13:03:42.856066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b82b4cd33cdf876 switched to configuration voters=(7746953102461106294)"}
	{"level":"info","ts":"2024-08-05T13:03:42.856666Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d09f5fd90ccfd668","local-member-id":"6b82b4cd33cdf876","added-peer-id":"6b82b4cd33cdf876","added-peer-peer-urls":["https://192.168.50.228:2380"]}
	{"level":"info","ts":"2024-08-05T13:03:42.857234Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T13:03:42.857414Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.228:2380"}
	{"level":"info","ts":"2024-08-05T13:03:42.85753Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.228:2380"}
	{"level":"info","ts":"2024-08-05T13:03:43.70053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b82b4cd33cdf876 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:43.700631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b82b4cd33cdf876 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:43.700684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b82b4cd33cdf876 received MsgPreVoteResp from 6b82b4cd33cdf876 at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:43.700713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b82b4cd33cdf876 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:43.700738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b82b4cd33cdf876 received MsgVoteResp from 6b82b4cd33cdf876 at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:43.700764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b82b4cd33cdf876 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:43.700793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b82b4cd33cdf876 elected leader 6b82b4cd33cdf876 at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:43.70468Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6b82b4cd33cdf876","local-member-attributes":"{Name:default-k8s-diff-port-371585 ClientURLs:[https://192.168.50.228:2379]}","request-path":"/0/members/6b82b4cd33cdf876/attributes","cluster-id":"d09f5fd90ccfd668","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T13:03:43.704762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T13:03:43.704798Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:43.713046Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T13:03:43.704819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T13:03:43.713429Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d09f5fd90ccfd668","local-member-id":"6b82b4cd33cdf876","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:43.7136Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:43.713643Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:43.719541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:43.719603Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:43.75675Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.228:2379"}
	
	
	==> kernel <==
	 13:13:08 up 14 min,  0 users,  load average: 0.15, 0.17, 0.11
	Linux default-k8s-diff-port-371585 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9] <==
	I0805 13:07:04.493561       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:08:45.493217       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:08:45.493331       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0805 13:08:46.494344       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:08:46.494403       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:08:46.494418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:08:46.494515       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:08:46.494590       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:08:46.495827       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:09:46.495584       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:09:46.495661       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:09:46.495670       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:09:46.496917       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:09:46.496986       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:09:46.497015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:11:46.496703       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:11:46.497006       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:11:46.497053       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:11:46.497157       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:11:46.497249       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:11:46.498509       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
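The 503s above mean the aggregated v1beta1.metrics.k8s.io API has no healthy backend behind it, which is also why the kube-controller-manager section below keeps logging stale GroupVersion discovery errors for metrics.k8s.io/v1beta1. A minimal diagnostic sketch, assuming kubectl access to the same default-k8s-diff-port-371585 context used by this test and that the addon exposes metrics-server through a Service of the same name in kube-system (the usual layout, not confirmed by this log):

  # Does the aggregated API report Available=True?
  kubectl --context default-k8s-diff-port-371585 get apiservice v1beta1.metrics.k8s.io
  # Does the Service backing it have any ready endpoints?
  kubectl --context default-k8s-diff-port-371585 -n kube-system get endpoints metrics-server

While the metrics-server pod stays in ImagePullBackOff (see the kubelet log further down), the APIService remains unavailable and these apiserver/controller-manager messages keep repeating.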
	
	
	==> kube-controller-manager [aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32] <==
	I0805 13:07:31.452235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:08:01.037588       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:08:01.460330       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:08:31.042705       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:08:31.468390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:09:01.047710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:09:01.476662       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:09:31.054429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:09:31.485184       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:09:52.033288       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="477.537µs"
	E0805 13:10:01.060300       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:10:01.495108       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:10:03.030689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="49.76µs"
	E0805 13:10:31.065960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:10:31.503650       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:11:01.070972       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:11:01.511173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:11:31.077629       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:11:31.520319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:12:01.084047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:12:01.530084       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:12:31.089207       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:12:31.538616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:13:01.094694       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:13:01.546730       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1] <==
	I0805 13:04:02.016712       1 server_linux.go:69] "Using iptables proxy"
	I0805 13:04:02.026784       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.228"]
	I0805 13:04:02.177715       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 13:04:02.177761       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 13:04:02.177777       1 server_linux.go:165] "Using iptables Proxier"
	I0805 13:04:02.190427       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 13:04:02.190692       1 server.go:872] "Version info" version="v1.30.3"
	I0805 13:04:02.190704       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 13:04:02.193503       1 config.go:192] "Starting service config controller"
	I0805 13:04:02.193520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 13:04:02.193556       1 config.go:101] "Starting endpoint slice config controller"
	I0805 13:04:02.193560       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 13:04:02.193873       1 config.go:319] "Starting node config controller"
	I0805 13:04:02.193878       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 13:04:02.294627       1 shared_informer.go:320] Caches are synced for node config
	I0805 13:04:02.294654       1 shared_informer.go:320] Caches are synced for service config
	I0805 13:04:02.294681       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5] <==
	W0805 13:03:46.344338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 13:03:46.344525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 13:03:46.346886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 13:03:46.346975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 13:03:46.347160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 13:03:46.347213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 13:03:46.485968       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:46.486057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 13:03:46.500755       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:46.500957       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 13:03:46.528303       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 13:03:46.528354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 13:03:46.567148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 13:03:46.567692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 13:03:46.589652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 13:03:46.589706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 13:03:46.670338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:46.670509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 13:03:46.684013       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 13:03:46.684171       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 13:03:46.764336       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 13:03:46.764580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 13:03:46.816399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 13:03:46.816589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 13:03:48.893758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 13:10:48 default-k8s-diff-port-371585 kubelet[3894]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:10:48 default-k8s-diff-port-371585 kubelet[3894]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:10:48 default-k8s-diff-port-371585 kubelet[3894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:10:48 default-k8s-diff-port-371585 kubelet[3894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:10:51 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:10:51.015191    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:11:02 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:11:02.015370    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:11:13 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:11:13.014735    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:11:26 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:11:26.015680    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:11:37 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:11:37.016000    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:11:48 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:11:48.031296    3894 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:11:48 default-k8s-diff-port-371585 kubelet[3894]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:11:48 default-k8s-diff-port-371585 kubelet[3894]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:11:48 default-k8s-diff-port-371585 kubelet[3894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:11:48 default-k8s-diff-port-371585 kubelet[3894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:11:52 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:11:52.016263    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:12:04 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:12:04.014753    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:12:19 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:12:19.015129    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:12:34 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:12:34.014991    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:12:48 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:12:48.019613    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:12:48 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:12:48.030870    3894 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:12:48 default-k8s-diff-port-371585 kubelet[3894]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:12:48 default-k8s-diff-port-371585 kubelet[3894]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:12:48 default-k8s-diff-port-371585 kubelet[3894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:12:48 default-k8s-diff-port-371585 kubelet[3894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:13:00 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:13:00.015706    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
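Two distinct issues recur in the kubelet log above: the iptables canary failure (the guest reports no ip6tables nat table) and the metrics-server pod stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4. A short sketch of how each could be confirmed from the host, assuming the minikube profile name matches the node name above and that the addon's Deployment is named metrics-server (neither is stated in this log):

  # Which image is the metrics-server Deployment actually configured to pull?
  kubectl --context default-k8s-diff-port-371585 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
  # Is the ip6tables nat module loaded in the guest? (empty output means it is not)
  out/minikube-linux-amd64 -p default-k8s-diff-port-371585 ssh -- "lsmod | grep ip6table_nat"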
	
	
	==> storage-provisioner [1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592] <==
	I0805 13:04:03.950730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 13:04:03.967146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 13:04:03.968129       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 13:04:03.985849       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 13:04:03.986122       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-371585_c048ac55-484c-4617-9c53-a4047f8fdf69!
	I0805 13:04:03.989891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6205fb02-bba8-4c3a-9d67-bf47b061a534", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-371585_c048ac55-484c-4617-9c53-a4047f8fdf69 became leader
	I0805 13:04:04.087213       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-371585_c048ac55-484c-4617-9c53-a4047f8fdf69!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xf92r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 describe pod metrics-server-569cc877fc-xf92r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-371585 describe pod metrics-server-569cc877fc-xf92r: exit status 1 (64.656445ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xf92r" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-371585 describe pod metrics-server-569cc877fc-xf92r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.21s)
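The NotFound in the post-mortem above is what happens when a pod name captured earlier (metrics-server-569cc877fc-xf92r) is described after that pod has gone away; selecting by label instead of by name avoids the stale reference. A sketch, with the caveat that k8s-app=metrics-server is the conventional addon label and an assumption here:

  kubectl --context default-k8s-diff-port-371585 -n kube-system get pods -l k8s-app=metrics-server -o name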

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:07:48.267984  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:07:52.926338  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:08:32.945465  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:08:36.335136  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:09:06.749171  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:09:11.315103  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:09:59.380641  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:10:07.007908  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:10:27.753027  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:10:29.795073  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:10:48.987264  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 13:10:49.458271  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:10:55.980347  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:11:30.052392  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:12:09.900112  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:12:48.267990  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:12:52.927332  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:13:36.335880  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:14:06.748683  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 13:15:07.007925  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 13:15:27.753208  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 13:15:48.987065  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 13:15:49.458737  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (233.845962ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-635707" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
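For manual triage, a rough equivalent of the pod check the helper keeps polling is a single label-selector query against the profile's API server. This is only a sketch: it assumes kubectl is available and that the kubeconfig context carries the profile name old-k8s-version-635707 (the name minikube sets by default), neither of which is part of the recorded test run:

	kubectl --context old-k8s-version-635707 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard

With the apiserver at 192.168.61.41:8443 refusing connections, this query would fail the same way until the control plane comes back up.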
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (231.392393ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-635707 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-635707 logs -n 25: (1.660892516s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-119870 sudo cat                              | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo find                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo crio                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-119870                                       | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:55:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:55:11.960471  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960479  451238 out.go:304] Setting ErrFile to fd 2...
	I0805 12:55:11.960484  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960646  451238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:55:11.961145  451238 out.go:298] Setting JSON to false
	I0805 12:55:11.962063  451238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9459,"bootTime":1722853053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:55:11.962121  451238 start.go:139] virtualization: kvm guest
	I0805 12:55:11.964372  451238 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:55:11.965770  451238 notify.go:220] Checking for updates...
	I0805 12:55:11.965787  451238 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:55:11.967106  451238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:55:11.968790  451238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:55:11.970181  451238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:55:11.971500  451238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:55:11.973243  451238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:55:11.974825  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:55:11.975239  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.975319  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:11.990296  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0805 12:55:11.990704  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:11.991235  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:11.991259  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:11.991575  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:11.991765  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:11.993484  451238 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 12:55:11.994687  451238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:55:11.994952  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.994984  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:12.009528  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0805 12:55:12.009879  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:12.010353  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:12.010375  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:12.010670  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:12.010857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:12.044634  451238 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:55:12.045859  451238 start.go:297] selected driver: kvm2
	I0805 12:55:12.045876  451238 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.045987  451238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:55:12.046662  451238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.046731  451238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:55:12.061918  451238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:55:12.062400  451238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:55:12.062484  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:55:12.062502  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:55:12.062572  451238 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.062722  451238 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.064478  451238 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:55:10.820047  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:13.892041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:12.065640  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:55:12.065680  451238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:55:12.065701  451238 cache.go:56] Caching tarball of preloaded images
	I0805 12:55:12.065786  451238 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:55:12.065797  451238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:55:12.065897  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:55:12.066073  451238 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:55:19.971977  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:23.044092  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:29.124041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:32.196124  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:38.276045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:41.348117  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:47.428042  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:50.500022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:56.580074  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:59.652091  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:05.732072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:08.804128  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:14.884085  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:17.956073  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:24.036067  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:27.108059  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:33.188012  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:36.260134  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:42.340036  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:45.412038  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:51.492022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:54.564068  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:00.644018  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:03.716112  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:09.796041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:12.868080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:18.948054  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:22.020023  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:28.100099  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:31.172076  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:37.251997  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:40.324080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:46.404055  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:49.476072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:55.556045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:58.627984  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:58:01.632326  450576 start.go:364] duration metric: took 4m17.994768704s to acquireMachinesLock for "no-preload-669469"
	I0805 12:58:01.632391  450576 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:01.632403  450576 fix.go:54] fixHost starting: 
	I0805 12:58:01.632845  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:01.632880  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:01.648358  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0805 12:58:01.648860  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:01.649387  450576 main.go:141] libmachine: Using API Version  1
	I0805 12:58:01.649410  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:01.649779  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:01.649963  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:01.650176  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 12:58:01.651681  450576 fix.go:112] recreateIfNeeded on no-preload-669469: state=Stopped err=<nil>
	I0805 12:58:01.651715  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	W0805 12:58:01.651903  450576 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:01.653860  450576 out.go:177] * Restarting existing kvm2 VM for "no-preload-669469" ...
	I0805 12:58:01.655338  450576 main.go:141] libmachine: (no-preload-669469) Calling .Start
	I0805 12:58:01.655475  450576 main.go:141] libmachine: (no-preload-669469) Ensuring networks are active...
	I0805 12:58:01.656224  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network default is active
	I0805 12:58:01.656565  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network mk-no-preload-669469 is active
	I0805 12:58:01.656898  450576 main.go:141] libmachine: (no-preload-669469) Getting domain xml...
	I0805 12:58:01.657537  450576 main.go:141] libmachine: (no-preload-669469) Creating domain...
	I0805 12:58:02.879809  450576 main.go:141] libmachine: (no-preload-669469) Waiting to get IP...
	I0805 12:58:02.880800  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:02.881194  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:02.881270  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:02.881175  451829 retry.go:31] will retry after 303.380177ms: waiting for machine to come up
	I0805 12:58:03.185834  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.186259  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.186288  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.186214  451829 retry.go:31] will retry after 263.494141ms: waiting for machine to come up
	I0805 12:58:03.451923  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.452263  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.452340  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.452217  451829 retry.go:31] will retry after 310.615163ms: waiting for machine to come up
	I0805 12:58:01.629832  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:01.629873  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630250  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:58:01.630295  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630511  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:58:01.632158  450393 machine.go:97] duration metric: took 4m37.422562602s to provisionDockerMachine
	I0805 12:58:01.632208  450393 fix.go:56] duration metric: took 4m37.444588707s for fixHost
	I0805 12:58:01.632226  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 4m37.44461751s
	W0805 12:58:01.632250  450393 start.go:714] error starting host: provision: host is not running
	W0805 12:58:01.632431  450393 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0805 12:58:01.632445  450393 start.go:729] Will try again in 5 seconds ...
	I0805 12:58:03.764803  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.765280  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.765305  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.765243  451829 retry.go:31] will retry after 570.955722ms: waiting for machine to come up
	I0805 12:58:04.338423  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.338863  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.338893  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.338811  451829 retry.go:31] will retry after 485.490715ms: waiting for machine to come up
	I0805 12:58:04.825511  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.825882  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.825911  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.825823  451829 retry.go:31] will retry after 671.109731ms: waiting for machine to come up
	I0805 12:58:05.498113  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:05.498529  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:05.498557  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:05.498467  451829 retry.go:31] will retry after 997.668856ms: waiting for machine to come up
	I0805 12:58:06.497843  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:06.498144  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:06.498161  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:06.498120  451829 retry.go:31] will retry after 996.614411ms: waiting for machine to come up
	I0805 12:58:07.496801  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:07.497298  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:07.497334  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:07.497249  451829 retry.go:31] will retry after 1.384682595s: waiting for machine to come up
	I0805 12:58:06.634410  450393 start.go:360] acquireMachinesLock for embed-certs-321139: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:58:08.883309  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:08.883701  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:08.883732  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:08.883642  451829 retry.go:31] will retry after 2.017073843s: waiting for machine to come up
	I0805 12:58:10.903852  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:10.904279  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:10.904310  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:10.904233  451829 retry.go:31] will retry after 2.485880433s: waiting for machine to come up
	I0805 12:58:13.392693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:13.393169  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:13.393199  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:13.393116  451829 retry.go:31] will retry after 2.986076236s: waiting for machine to come up
	I0805 12:58:16.380921  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:16.381475  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:16.381508  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:16.381432  451829 retry.go:31] will retry after 4.291617536s: waiting for machine to come up
	I0805 12:58:21.948770  450884 start.go:364] duration metric: took 4m4.773878111s to acquireMachinesLock for "default-k8s-diff-port-371585"
	I0805 12:58:21.948843  450884 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:21.948851  450884 fix.go:54] fixHost starting: 
	I0805 12:58:21.949291  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:21.949337  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:21.966933  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0805 12:58:21.967356  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:21.967874  450884 main.go:141] libmachine: Using API Version  1
	I0805 12:58:21.967899  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:21.968326  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:21.968638  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:21.968874  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 12:58:21.970608  450884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371585: state=Stopped err=<nil>
	I0805 12:58:21.970631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	W0805 12:58:21.970789  450884 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:21.973235  450884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-371585" ...
	I0805 12:58:21.974564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Start
	I0805 12:58:21.974751  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring networks are active...
	I0805 12:58:21.975581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network default is active
	I0805 12:58:21.976001  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network mk-default-k8s-diff-port-371585 is active
	I0805 12:58:21.976376  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Getting domain xml...
	I0805 12:58:21.977078  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Creating domain...
	I0805 12:58:20.678231  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678743  450576 main.go:141] libmachine: (no-preload-669469) Found IP for machine: 192.168.72.223
	I0805 12:58:20.678771  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has current primary IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678786  450576 main.go:141] libmachine: (no-preload-669469) Reserving static IP address...
	I0805 12:58:20.679230  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.679266  450576 main.go:141] libmachine: (no-preload-669469) Reserved static IP address: 192.168.72.223
	I0805 12:58:20.679288  450576 main.go:141] libmachine: (no-preload-669469) DBG | skip adding static IP to network mk-no-preload-669469 - found existing host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"}
	I0805 12:58:20.679302  450576 main.go:141] libmachine: (no-preload-669469) DBG | Getting to WaitForSSH function...
	I0805 12:58:20.679317  450576 main.go:141] libmachine: (no-preload-669469) Waiting for SSH to be available...
	I0805 12:58:20.681864  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682263  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.682297  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682447  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH client type: external
	I0805 12:58:20.682484  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa (-rw-------)
	I0805 12:58:20.682539  450576 main.go:141] libmachine: (no-preload-669469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:20.682557  450576 main.go:141] libmachine: (no-preload-669469) DBG | About to run SSH command:
	I0805 12:58:20.682568  450576 main.go:141] libmachine: (no-preload-669469) DBG | exit 0
	I0805 12:58:20.807791  450576 main.go:141] libmachine: (no-preload-669469) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:20.808168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetConfigRaw
	I0805 12:58:20.808767  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:20.811170  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811486  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.811517  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811738  450576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/config.json ...
	I0805 12:58:20.811957  450576 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:20.811976  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:20.812203  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.814305  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814656  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.814693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814823  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.814996  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815156  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815329  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.815503  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.815871  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.815887  450576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:20.920311  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:20.920344  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920642  450576 buildroot.go:166] provisioning hostname "no-preload-669469"
	I0805 12:58:20.920695  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920951  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.924029  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924583  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.924611  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924770  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.925001  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925190  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925334  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.925514  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.925755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.925774  450576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-669469 && echo "no-preload-669469" | sudo tee /etc/hostname
	I0805 12:58:21.046579  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-669469
	
	I0805 12:58:21.046614  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.049322  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049657  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.049687  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049851  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.050049  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050239  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050412  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.050588  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.050755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.050771  450576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-669469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-669469/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-669469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:21.165100  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:21.165134  450576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:21.165170  450576 buildroot.go:174] setting up certificates
	I0805 12:58:21.165180  450576 provision.go:84] configureAuth start
	I0805 12:58:21.165191  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:21.165477  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.168018  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168399  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.168443  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168703  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.171168  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171536  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.171565  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171638  450576 provision.go:143] copyHostCerts
	I0805 12:58:21.171713  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:21.171724  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:21.171807  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:21.171920  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:21.171930  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:21.171955  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:21.172010  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:21.172016  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:21.172037  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:21.172095  450576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.no-preload-669469 san=[127.0.0.1 192.168.72.223 localhost minikube no-preload-669469]
	I0805 12:58:21.287395  450576 provision.go:177] copyRemoteCerts
	I0805 12:58:21.287463  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:21.287505  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.290416  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290765  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.290796  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290962  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.291169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.291323  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.291460  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.373992  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:58:21.398249  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:21.422950  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:58:21.446469  450576 provision.go:87] duration metric: took 281.275299ms to configureAuth
	I0805 12:58:21.446500  450576 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:21.446688  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:58:21.446813  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.449833  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450219  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.450235  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450526  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.450814  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.450993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.451168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.451342  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.451515  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.451532  450576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:21.714813  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:21.714842  450576 machine.go:97] duration metric: took 902.872257ms to provisionDockerMachine
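The CRIO_MINIKUBE_OPTIONS value written a few lines above lands in /etc/sysconfig/crio.minikube; on the minikube guest image the crio systemd unit is expected to pick that file up as an environment file, which is why the provisioner restarts crio immediately afterwards. A quick hedged check from a shell on the guest (not part of this run) would be:

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i environmentfile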
	I0805 12:58:21.714858  450576 start.go:293] postStartSetup for "no-preload-669469" (driver="kvm2")
	I0805 12:58:21.714889  450576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:21.714940  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.715304  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:21.715333  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.717989  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718405  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.718427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718597  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.718832  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.718993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.719152  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.802634  450576 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:21.806957  450576 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:21.806985  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:21.807079  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:21.807186  450576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:21.807293  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:21.816690  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:21.839848  450576 start.go:296] duration metric: took 124.973515ms for postStartSetup
	I0805 12:58:21.839903  450576 fix.go:56] duration metric: took 20.207499572s for fixHost
	I0805 12:58:21.839934  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.842548  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.842869  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.842893  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.843090  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.843310  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843502  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843640  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.843815  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.844015  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.844029  450576 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:21.948584  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862701.921979093
	
	I0805 12:58:21.948613  450576 fix.go:216] guest clock: 1722862701.921979093
	I0805 12:58:21.948623  450576 fix.go:229] Guest: 2024-08-05 12:58:21.921979093 +0000 UTC Remote: 2024-08-05 12:58:21.83991063 +0000 UTC m=+278.340267839 (delta=82.068463ms)
	I0805 12:58:21.948671  450576 fix.go:200] guest clock delta is within tolerance: 82.068463ms
	I0805 12:58:21.948680  450576 start.go:83] releasing machines lock for "no-preload-669469", held for 20.316310092s
	I0805 12:58:21.948713  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.948990  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.951624  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952086  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.952136  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952256  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952797  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952984  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.953065  450576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:21.953113  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.953227  450576 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:21.953255  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.955837  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956081  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956200  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956227  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956370  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956504  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956528  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.956568  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956670  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.956760  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956857  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.956906  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.957058  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.957205  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:22.058847  450576 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:22.065110  450576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:22.211415  450576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:22.219405  450576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:22.219492  450576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:22.240631  450576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:22.240659  450576 start.go:495] detecting cgroup driver to use...
	I0805 12:58:22.240764  450576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:22.258777  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:22.273312  450576 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:22.273400  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:22.288455  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:22.305028  450576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:22.428098  450576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:22.586232  450576 docker.go:233] disabling docker service ...
	I0805 12:58:22.586318  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:22.611888  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:22.627393  450576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:22.757335  450576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:22.878168  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:22.896174  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:22.914395  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:23.229202  450576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 12:58:23.229300  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.242180  450576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:23.242262  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.254577  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.265805  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.276522  450576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:23.287288  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.297863  450576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.314322  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
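Taken together, the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place rather than templating a new file. A rough sketch of the keys they leave behind (only the settings these edits touch; the rest of the stock drop-in is assumed unchanged) is:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]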
	I0805 12:58:23.324662  450576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:23.334125  450576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:23.334192  450576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:23.346701  450576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
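The sysctl probe above fails because br_netfilter is not yet loaded on the freshly booted guest, so the fallback is a plain modprobe followed by enabling IPv4 forwarding through /proc. A short hedged check that both settings actually took effect (not something this run performs, just a manual verification) would be:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward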
	I0805 12:58:23.356256  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:23.474046  450576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:23.617276  450576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:23.617363  450576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:23.622001  450576 start.go:563] Will wait 60s for crictl version
	I0805 12:58:23.622047  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:23.626041  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:23.670186  450576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:23.670267  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.700616  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.733411  450576 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 12:58:23.254293  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting to get IP...
	I0805 12:58:23.255331  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255802  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255880  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.255773  451963 retry.go:31] will retry after 245.269435ms: waiting for machine to come up
	I0805 12:58:23.502617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503130  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.503068  451963 retry.go:31] will retry after 243.155673ms: waiting for machine to come up
	I0805 12:58:23.747498  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747913  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.747867  451963 retry.go:31] will retry after 459.286566ms: waiting for machine to come up
	I0805 12:58:24.208594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209076  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209127  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.209003  451963 retry.go:31] will retry after 499.069946ms: waiting for machine to come up
	I0805 12:58:24.709128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709554  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709577  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.709512  451963 retry.go:31] will retry after 732.735525ms: waiting for machine to come up
	I0805 12:58:25.443632  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444185  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:25.444125  451963 retry.go:31] will retry after 883.69375ms: waiting for machine to come up
	I0805 12:58:26.329477  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330010  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:26.329947  451963 retry.go:31] will retry after 1.157298734s: waiting for machine to come up
	I0805 12:58:23.734875  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:23.737945  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738460  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:23.738487  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738646  450576 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:23.742894  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:23.756164  450576 kubeadm.go:883] updating cluster {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:23.756435  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.035575  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.352144  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.657175  450576 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 12:58:24.657266  450576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:24.694685  450576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 12:58:24.694720  450576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:58:24.694809  450576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.694831  450576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.694845  450576 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0805 12:58:24.694867  450576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.694835  450576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.694815  450576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.694801  450576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.694917  450576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.696859  450576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.696860  450576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696902  450576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.696904  450576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.696881  450576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0805 12:58:24.864249  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.867334  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.905018  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.920294  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.925405  450576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0805 12:58:24.925440  450576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0805 12:58:24.925456  450576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.925476  450576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.925508  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.925520  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.973191  450576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0805 12:58:24.973240  450576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.973304  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.986685  450576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0805 12:58:24.986706  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.986723  450576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.986772  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.037012  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.037066  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:25.037132  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.067311  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.068528  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.073769  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073831  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.073872  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073933  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.082476  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0805 12:58:25.126044  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0805 12:58:25.126080  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126127  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0805 12:58:25.126144  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126230  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:25.149903  450576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0805 12:58:25.149965  450576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.150028  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196288  450576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0805 12:58:25.196336  450576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.196388  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196416  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0805 12:58:25.196510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0805 12:58:25.651632  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.532922  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.406747514s)
	I0805 12:58:27.532959  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0805 12:58:27.532994  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533010  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.406755032s)
	I0805 12:58:27.533048  450576 ssh_runner.go:235] Completed: which crictl: (2.383000552s)
	I0805 12:58:27.533050  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0805 12:58:27.533082  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533082  450576 ssh_runner.go:235] Completed: which crictl: (2.336681164s)
	I0805 12:58:27.533095  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:27.533117  450576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.88145852s)
	I0805 12:58:27.533139  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:27.533161  450576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0805 12:58:27.533198  450576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.533234  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:27.488683  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489080  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489108  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:27.489027  451963 retry.go:31] will retry after 997.566168ms: waiting for machine to come up
	I0805 12:58:28.488397  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488846  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488878  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:28.488794  451963 retry.go:31] will retry after 1.327498575s: waiting for machine to come up
	I0805 12:58:29.818339  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818705  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818735  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:29.818660  451963 retry.go:31] will retry after 2.105158858s: waiting for machine to come up
	I0805 12:58:31.925036  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925601  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:31.925492  451963 retry.go:31] will retry after 2.860711737s: waiting for machine to come up
	I0805 12:58:29.629896  450576 ssh_runner.go:235] Completed: which crictl: (2.096633143s)
	I0805 12:58:29.630000  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:29.630084  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.096969259s)
	I0805 12:58:29.630184  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0805 12:58:29.630102  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.09697893s)
	I0805 12:58:29.630255  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0805 12:58:29.630121  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0: (2.096957841s)
	I0805 12:58:29.630282  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:29.630286  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630313  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.630322  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630381  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.675831  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0805 12:58:29.675914  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 12:58:29.676019  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:31.695376  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.06501136s)
	I0805 12:58:31.695429  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0805 12:58:31.695458  450576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:31.695476  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.019437866s)
	I0805 12:58:31.695382  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (2.064967299s)
	I0805 12:58:31.695510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0805 12:58:31.695523  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0805 12:58:31.695536  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:34.789126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789644  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789673  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:34.789592  451963 retry.go:31] will retry after 2.763937018s: waiting for machine to come up
	I0805 12:58:33.659147  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.963588438s)
	I0805 12:58:33.659183  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0805 12:58:33.659216  450576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:33.659263  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:37.466579  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.807281649s)
	I0805 12:58:37.466623  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0805 12:58:37.466657  450576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:37.466709  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:38.111584  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 12:58:38.111633  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:38.111678  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
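Because no preload tarball exists for v1.31.0-rc.0, every missing image above is transferred from the host cache and loaded with podman, which on this guest image shares its containers/storage backend with CRI-O. A minimal sketch of the same load-and-verify pattern done by hand, assuming the tarball already sits under /var/lib/minikube/images, is:

	sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	sudo crictl images | grep kube-scheduler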
	I0805 12:58:37.554827  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555233  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555263  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:37.555184  451963 retry.go:31] will retry after 3.143735106s: waiting for machine to come up
	I0805 12:58:40.701139  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701615  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Found IP for machine: 192.168.50.228
	I0805 12:58:40.701649  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has current primary IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701660  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserving static IP address...
	I0805 12:58:40.702105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.702126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserved static IP address: 192.168.50.228
	I0805 12:58:40.702146  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | skip adding static IP to network mk-default-k8s-diff-port-371585 - found existing host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"}
	I0805 12:58:40.702156  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for SSH to be available...
	I0805 12:58:40.702198  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Getting to WaitForSSH function...
	I0805 12:58:40.704600  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.704920  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.704950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.705091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH client type: external
	I0805 12:58:40.705129  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa (-rw-------)
	I0805 12:58:40.705179  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:40.705200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | About to run SSH command:
	I0805 12:58:40.705218  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | exit 0
	I0805 12:58:40.836818  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:40.837228  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetConfigRaw
	I0805 12:58:40.837884  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:40.840503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.840843  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.840870  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.841129  450884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/config.json ...
	I0805 12:58:40.841353  450884 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:40.841373  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:40.841587  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.843943  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844308  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.844336  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.844614  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844922  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.845067  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.845322  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.845333  450884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:40.952367  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:40.952410  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952733  450884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-371585"
	I0805 12:58:40.952762  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.955642  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.956077  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.956493  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956651  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956804  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.957027  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.957239  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.957255  450884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-371585 && echo "default-k8s-diff-port-371585" | sudo tee /etc/hostname
	I0805 12:58:41.077775  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-371585
	
	I0805 12:58:41.077808  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.080777  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081230  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.081273  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081406  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.081631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081963  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.082139  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.082315  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.082333  450884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-371585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-371585/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-371585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:41.200835  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:41.200871  450884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:41.200923  450884 buildroot.go:174] setting up certificates
	I0805 12:58:41.200934  450884 provision.go:84] configureAuth start
	I0805 12:58:41.200945  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:41.201284  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:41.204107  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204460  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.204494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.206634  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.206948  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.206977  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.207048  450884 provision.go:143] copyHostCerts
	I0805 12:58:41.207139  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:41.207151  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:41.207215  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:41.207333  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:41.207345  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:41.207372  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:41.207451  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:41.207462  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:41.207502  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:41.207573  450884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-371585 san=[127.0.0.1 192.168.50.228 default-k8s-diff-port-371585 localhost minikube]
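For reference, a minimal Go sketch of issuing a server certificate with the SAN list shown in the log line above. This is illustrative only and not minikube's provision code: it self-signs the certificate instead of signing with the ca.pem/ca-key.pem pair, and the names and IPs are simply copied from the san=[...] list.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key for the server certificate (the real flow signs with the cluster CA key).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-371585"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the san=[...] list in the log line above.
		DNSNames:    []string{"default-k8s-diff-port-371585", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.228")},
	}
	// Self-signed for brevity: the template is also used as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}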
	I0805 12:58:41.357243  450884 provision.go:177] copyRemoteCerts
	I0805 12:58:41.357344  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:41.357386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.360309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360697  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.360738  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360933  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.361120  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.361295  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.361474  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.454251  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:41.480595  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0805 12:58:41.506729  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:58:41.533349  450884 provision.go:87] duration metric: took 332.399026ms to configureAuth
	I0805 12:58:41.533402  450884 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:41.533575  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:58:41.533655  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.536469  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.536831  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.536862  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.537006  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.537197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537541  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.537734  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.537946  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.537968  450884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:41.827043  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:41.827078  450884 machine.go:97] duration metric: took 985.710155ms to provisionDockerMachine
	I0805 12:58:41.827095  450884 start.go:293] postStartSetup for "default-k8s-diff-port-371585" (driver="kvm2")
	I0805 12:58:41.827109  450884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:41.827145  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:41.827564  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:41.827605  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.830350  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830724  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.830761  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830853  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.831034  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.831206  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.831329  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.915261  450884 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:41.919719  450884 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:41.919760  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:41.919835  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:41.919930  450884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:41.920062  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:41.929842  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:41.958933  450884 start.go:296] duration metric: took 131.820227ms for postStartSetup
	I0805 12:58:41.958981  450884 fix.go:56] duration metric: took 20.010130311s for fixHost
	I0805 12:58:41.959012  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.962092  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962510  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.962540  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962726  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.962968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963153  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.963479  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.963687  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.963700  450884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:42.080993  451238 start.go:364] duration metric: took 3m30.014883629s to acquireMachinesLock for "old-k8s-version-635707"
	I0805 12:58:42.081066  451238 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:42.081076  451238 fix.go:54] fixHost starting: 
	I0805 12:58:42.081569  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:42.081611  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:42.101889  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0805 12:58:42.102366  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:42.102910  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:58:42.102947  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:42.103310  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:42.103552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:58:42.103718  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:58:42.105465  451238 fix.go:112] recreateIfNeeded on old-k8s-version-635707: state=Stopped err=<nil>
	I0805 12:58:42.105504  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	W0805 12:58:42.105674  451238 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:42.107563  451238 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-635707" ...
	I0805 12:58:39.567840  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.456137011s)
	I0805 12:58:39.567879  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0805 12:58:39.567905  450576 cache_images.go:123] Successfully loaded all cached images
	I0805 12:58:39.567911  450576 cache_images.go:92] duration metric: took 14.873174481s to LoadCachedImages
	I0805 12:58:39.567921  450576 kubeadm.go:934] updating node { 192.168.72.223 8443 v1.31.0-rc.0 crio true true} ...
	I0805 12:58:39.568053  450576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-669469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:39.568137  450576 ssh_runner.go:195] Run: crio config
	I0805 12:58:39.616607  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:39.616634  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:39.616660  450576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:39.616683  450576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.223 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-669469 NodeName:no-preload-669469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:39.616822  450576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-669469"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
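For reference, a small Go sketch of reading the multi-document kubeadm config printed above and pulling out the KubeletConfiguration evictionHard thresholds. Illustrative only: it assumes the gopkg.in/yaml.v3 module is available and that the config has already been copied to /var/tmp/minikube/kubeadm.yaml, which happens later in this log.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletConfig captures only the fields we want to inspect; all other
// fields in each YAML document are ignored by the decoder.
type kubeletConfig struct {
	Kind         string            `yaml:"kind"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	// The file holds several YAML documents separated by "---";
	// the decoder yields them one at a time until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc kubeletConfig
		if err := dec.Decode(&doc); err != nil {
			break
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Println("evictionHard:", doc.EvictionHard)
		}
	}
}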
	
	I0805 12:58:39.616896  450576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 12:58:39.627827  450576 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:39.627901  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:39.637348  450576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0805 12:58:39.653917  450576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 12:58:39.670196  450576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0805 12:58:39.686922  450576 ssh_runner.go:195] Run: grep 192.168.72.223	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:39.690804  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
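The bash one-liner above rewrites /etc/hosts so that exactly one control-plane.minikube.internal entry remains. A rough Go equivalent, shown as a sketch under the assumption of direct file access (the real run edits the file through sudo over SSH):

package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line whose last field matches host
// and appends a fresh "ip<TAB>host" line, mirroring the grep/echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !errors.Is(err, os.ErrNotExist) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[len(fields)-1] == host {
			continue // drop the stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Writes to a scratch copy; the real flow targets /etc/hosts on the guest.
	fmt.Println(ensureHostsEntry("/tmp/hosts", "192.168.72.223", "control-plane.minikube.internal"))
}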
	I0805 12:58:39.703146  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:39.834718  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:39.857015  450576 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469 for IP: 192.168.72.223
	I0805 12:58:39.857036  450576 certs.go:194] generating shared ca certs ...
	I0805 12:58:39.857057  450576 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:39.857229  450576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:39.857286  450576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:39.857300  450576 certs.go:256] generating profile certs ...
	I0805 12:58:39.857431  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/client.key
	I0805 12:58:39.857489  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key.dd0884bb
	I0805 12:58:39.857535  450576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key
	I0805 12:58:39.857683  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:39.857723  450576 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:39.857739  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:39.857769  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:39.857834  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:39.857872  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:39.857923  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:39.858695  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:39.895944  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:39.925816  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:39.960150  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:39.993307  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:58:40.027900  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:58:40.053492  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:40.077331  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:40.101010  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:40.123991  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:40.147563  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:40.170414  450576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:40.188256  450576 ssh_runner.go:195] Run: openssl version
	I0805 12:58:40.193955  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:40.204793  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209061  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209115  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.214948  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:40.226193  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:40.237723  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.241960  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.242019  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.247502  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:40.258791  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:40.270176  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274717  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274786  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.280457  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:40.292091  450576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:40.296842  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:40.303003  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:40.309009  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:40.314951  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:40.320674  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:40.326433  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
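The openssl x509 -checkend 86400 runs above test whether each control-plane certificate expires within the next 24 hours. A comparable check written in Go, shown only as a sketch (the certificate path is just an example from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}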
	I0805 12:58:40.331848  450576 kubeadm.go:392] StartCluster: {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:40.331938  450576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:40.331975  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.374390  450576 cri.go:89] found id: ""
	I0805 12:58:40.374482  450576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:40.385467  450576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:40.385485  450576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:40.385531  450576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:40.395411  450576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:40.396455  450576 kubeconfig.go:125] found "no-preload-669469" server: "https://192.168.72.223:8443"
	I0805 12:58:40.400090  450576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:40.410942  450576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.223
	I0805 12:58:40.410971  450576 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:40.410985  450576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:40.411032  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.453021  450576 cri.go:89] found id: ""
	I0805 12:58:40.453115  450576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:40.470389  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:40.480421  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:40.480445  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:40.480502  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:58:40.489625  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:40.489672  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:40.499261  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:58:40.508571  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:40.508634  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:40.517811  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.526563  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:40.526620  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.535753  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:58:40.544981  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:40.545040  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:40.555237  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:40.565180  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:40.683889  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.632122  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.866665  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.944022  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:42.048030  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:42.048127  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:42.548995  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.048336  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.086457  450576 api_server.go:72] duration metric: took 1.038426772s to wait for apiserver process to appear ...
	I0805 12:58:43.086487  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:43.086509  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:43.086982  450576 api_server.go:269] stopped: https://192.168.72.223:8443/healthz: Get "https://192.168.72.223:8443/healthz": dial tcp 192.168.72.223:8443: connect: connection refused
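The healthz probe above fails with "connection refused" because the apiserver has not come up yet, so minikube keeps polling the endpoint. A minimal sketch of such a poll loop (illustrative, not minikube's api_server.go; TLS verification is skipped here only because the sketch has no cluster CA at hand):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.223:8443/healthz", 30*time.Second))
}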
	I0805 12:58:42.080800  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862722.053648046
	
	I0805 12:58:42.080828  450884 fix.go:216] guest clock: 1722862722.053648046
	I0805 12:58:42.080839  450884 fix.go:229] Guest: 2024-08-05 12:58:42.053648046 +0000 UTC Remote: 2024-08-05 12:58:41.958987261 +0000 UTC m=+264.923354352 (delta=94.660785ms)
	I0805 12:58:42.080867  450884 fix.go:200] guest clock delta is within tolerance: 94.660785ms
	I0805 12:58:42.080876  450884 start.go:83] releasing machines lock for "default-k8s-diff-port-371585", held for 20.132054114s
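The guest-clock check above compares the VM clock against the host clock and accepts the skew if it falls inside a tolerance window (94.66ms here). A tiny sketch of that comparison; the 2s tolerance below is an assumed value, not taken from the log:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock delta is at most tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(94 * time.Millisecond) // roughly the delta seen in the log above
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}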
	I0805 12:58:42.080916  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.081260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:42.084196  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084662  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.084695  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084867  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085589  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085786  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085875  450884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:42.085925  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.086064  450884 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:42.086091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.088693  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089018  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089042  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.089455  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.089729  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.089730  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089785  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089881  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.089970  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.090128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.090286  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.090457  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.193160  450884 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:42.199341  450884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:42.344713  450884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:42.350944  450884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:42.351026  450884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:42.368162  450884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:42.368196  450884 start.go:495] detecting cgroup driver to use...
	I0805 12:58:42.368260  450884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:42.384477  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:42.401847  450884 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:42.401907  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:42.416318  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:42.430994  450884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:42.545944  450884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:42.721877  450884 docker.go:233] disabling docker service ...
	I0805 12:58:42.721961  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:42.743504  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:42.763111  450884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:42.914270  450884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:43.064816  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:43.090748  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:43.115493  450884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:58:43.115565  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.132497  450884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:43.132583  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.146700  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.159880  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.175598  450884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:43.191263  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.207573  450884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.229567  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.248604  450884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:43.261272  450884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:43.261350  450884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:43.276740  450884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:43.288473  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:43.436066  450884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:43.593264  450884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:43.593355  450884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:43.599342  450884 start.go:563] Will wait 60s for crictl version
	I0805 12:58:43.599419  450884 ssh_runner.go:195] Run: which crictl
	I0805 12:58:43.603681  450884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:43.651181  450884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:43.651296  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.691418  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.725036  450884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:58:42.109016  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .Start
	I0805 12:58:42.109214  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:58:42.110192  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:58:42.110686  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:58:42.111108  451238 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:58:42.112194  451238 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:58:43.453015  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:58:43.453994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.454435  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.454504  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.454435  452186 retry.go:31] will retry after 270.355403ms: waiting for machine to come up
	I0805 12:58:43.727101  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.727583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.727641  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.727568  452186 retry.go:31] will retry after 313.75466ms: waiting for machine to come up
	I0805 12:58:44.043303  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.043954  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.043981  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.043855  452186 retry.go:31] will retry after 308.608573ms: waiting for machine to come up
	I0805 12:58:44.354830  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.355396  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.355421  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.355305  452186 retry.go:31] will retry after 510.256657ms: waiting for machine to come up
	I0805 12:58:44.866970  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.867534  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.867559  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.867424  452186 retry.go:31] will retry after 668.55006ms: waiting for machine to come up
	I0805 12:58:45.537377  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:45.537959  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:45.537989  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:45.537909  452186 retry.go:31] will retry after 677.549944ms: waiting for machine to come up
	I0805 12:58:46.217077  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:46.217591  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:46.217625  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:46.217483  452186 retry.go:31] will retry after 847.636867ms: waiting for machine to come up
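The retries above follow a simple grow-and-retry pattern while libvirt hands out a DHCP lease for the new domain. A minimal sketch of that wait-for-IP loop, assuming a hypothetical lookupLeaseIP helper (this is not minikube's actual retry.go API, only an illustration of the pattern):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt network's
// DHCP lease table by MAC address; in minikube this is answered by libmachine.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until the domain has an address or the deadline passes,
// growing the delay between attempts much like the retries logged above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Jittered, growing delay (compare the 270ms, 313ms, 510ms, 668ms... steps above).
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff += backoff / 2
		}
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:2a:da:c5", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```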
	I0805 12:58:43.726277  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:43.729689  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730162  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:43.730195  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730391  450884 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:43.735448  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
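The /etc/hosts command just above filters out any stale host.minikube.internal line and appends the fresh mapping in one pass. A rough local equivalent in Go, shown only as a sketch (the real step runs the bash one-liner on the guest via ssh_runner):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry mirrors the one-liner above: drop any line that already
// maps the hostname, then append the fresh "IP<TAB>host" entry.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // like `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Values taken from the log; the real step runs on the guest, not locally.
	if err := upsertHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```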
	I0805 12:58:43.749640  450884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:43.749808  450884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:58:43.749886  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:43.798507  450884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:58:43.798584  450884 ssh_runner.go:195] Run: which lz4
	I0805 12:58:43.803306  450884 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:58:43.809104  450884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:58:43.809144  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:58:45.333758  450884 crio.go:462] duration metric: took 1.530500213s to copy over tarball
	I0805 12:58:45.333831  450884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
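The preload step above boils down to: if /preloaded.tar.lz4 is missing on the guest, copy the cached tarball over and unpack it into /var with lz4. A simplified local sketch, with a plain file copy standing in for the scp over SSH:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensurePreload copies the cached tarball to dst if it is not already there,
// then extracts it, mirroring the stat/scp/tar sequence in the log.
func ensurePreload(src, dst, extractDir string) error {
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		if _, err := io.Copy(out, in); err != nil {
			return err
		}
	}
	// Same extraction command as the log, run locally:
	// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", extractDir, "-xf", dst)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := ensurePreload("preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```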
	I0805 12:58:43.587275  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.303995  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.304038  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.304057  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.308815  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.308849  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.587239  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.595116  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.595151  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.087372  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.094319  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.094363  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.586909  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.592210  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.592252  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.086763  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.095151  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.095182  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.586840  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.593834  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.593870  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.087516  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.093647  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:49.093677  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.587309  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.593592  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 12:58:49.602960  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:58:49.603001  450576 api_server.go:131] duration metric: took 6.516505116s to wait for apiserver health ...
	I0805 12:58:49.603013  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:49.603024  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:49.851135  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
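The 450576 lines above poll the apiserver's /healthz roughly every 500ms, logging each failing poststarthook until the endpoint finally returns 200 "ok". A minimal sketch of such a poll loop, using the URL from the log; TLS verification is skipped only to keep the sketch short, whereas the real client trusts the cluster's CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz requests /healthz on a fixed interval and stops once it
// returns 200, or gives up at the deadline.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is simply "ok"
			}
			// A 500 body lists each poststarthook with [+] ok or [-] failed.
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.223:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```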
	I0805 12:58:47.067245  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:47.067895  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:47.067930  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:47.067838  452186 retry.go:31] will retry after 1.275228928s: waiting for machine to come up
	I0805 12:58:48.344881  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:48.345295  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:48.345319  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:48.345258  452186 retry.go:31] will retry after 1.826891386s: waiting for machine to come up
	I0805 12:58:50.174583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:50.175111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:50.175138  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:50.175074  452186 retry.go:31] will retry after 1.53756677s: waiting for machine to come up
	I0805 12:58:51.714025  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:51.714529  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:51.714553  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:51.714485  452186 retry.go:31] will retry after 2.762270002s: waiting for machine to come up
	I0805 12:58:47.908896  450884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575029516s)
	I0805 12:58:47.908929  450884 crio.go:469] duration metric: took 2.575138566s to extract the tarball
	I0805 12:58:47.908938  450884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:58:47.964757  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:48.013358  450884 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:58:48.013392  450884 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:58:48.013404  450884 kubeadm.go:934] updating node { 192.168.50.228 8444 v1.30.3 crio true true} ...
	I0805 12:58:48.013533  450884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-371585 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:48.013623  450884 ssh_runner.go:195] Run: crio config
	I0805 12:58:48.062183  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:48.062219  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:48.062238  450884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:48.062274  450884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-371585 NodeName:default-k8s-diff-port-371585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:48.062474  450884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-371585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:58:48.062552  450884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:58:48.076490  450884 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:48.076583  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:48.090058  450884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0805 12:58:48.110202  450884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:58:48.131420  450884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0805 12:58:48.151774  450884 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:48.156904  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:48.172398  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:48.292999  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
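The three scp lines write the kubelet drop-in, the kubelet unit, and kubeadm.yaml.new, after which systemd is reloaded and the kubelet started. A hedged local sketch of that last part (file contents elided; the real step runs these commands on the guest through ssh_runner):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, like ssh_runner does remotely.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// The drop-in content would be generated from the cluster config, as in the log.
	dropIn := []byte("[Service]\n# ...generated kubelet flags...\n")
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", dropIn, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "start", "kubelet"},
	} {
		if err := run("sudo", args...); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
	}
}
```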
	I0805 12:58:48.310331  450884 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585 for IP: 192.168.50.228
	I0805 12:58:48.310366  450884 certs.go:194] generating shared ca certs ...
	I0805 12:58:48.310389  450884 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:48.310576  450884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:48.310640  450884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:48.310658  450884 certs.go:256] generating profile certs ...
	I0805 12:58:48.310803  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/client.key
	I0805 12:58:48.310881  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key.f7891227
	I0805 12:58:48.310946  450884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key
	I0805 12:58:48.311231  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:48.311317  450884 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:48.311354  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:48.311408  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:48.311447  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:48.311485  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:48.311545  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:48.312365  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:48.363733  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:48.395662  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:48.450822  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:48.495611  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 12:58:48.529393  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:58:48.557543  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:48.584777  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:48.611987  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:48.637500  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:48.664469  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:48.690221  450884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:48.709082  450884 ssh_runner.go:195] Run: openssl version
	I0805 12:58:48.716181  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:48.728455  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733395  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733456  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.739295  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:48.750515  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:48.761506  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.765995  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.766052  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.772121  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:48.783123  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:48.794318  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798795  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798843  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.804878  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:48.816757  450884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:48.821686  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:48.828121  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:48.834386  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:48.840425  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:48.846218  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:48.852035  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
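The openssl x509 -checkend 86400 calls above ask whether each certificate expires within the next 24 hours. The same check expressed in Go, as a sketch using the certificate paths from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the
// given window, equivalent to `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```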
	I0805 12:58:48.857997  450884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:48.858131  450884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:48.858179  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.908402  450884 cri.go:89] found id: ""
	I0805 12:58:48.908471  450884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:48.921185  450884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:48.921207  450884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:48.921258  450884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:48.932907  450884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:48.933927  450884 kubeconfig.go:125] found "default-k8s-diff-port-371585" server: "https://192.168.50.228:8444"
	I0805 12:58:48.936058  450884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:48.947233  450884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.228
	I0805 12:58:48.947262  450884 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:48.947273  450884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:48.947313  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.988179  450884 cri.go:89] found id: ""
	I0805 12:58:48.988281  450884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:49.005901  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:49.016576  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:49.016597  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:49.016648  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 12:58:49.029718  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:49.029822  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:49.041670  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 12:58:49.051650  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:49.051724  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:49.061671  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.071671  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:49.071755  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.082022  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 12:58:49.092013  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:49.092103  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:49.105446  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:49.118581  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:49.233260  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.199462  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.418823  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.500350  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
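The five commands above are the cluster-restart path re-running individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. Below is a minimal Go sketch that drives the same phase sequence; it is illustrative only, not minikube's restart code, and the binary and config paths are simply copied from the log lines above.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phase order and paths taken from the log; error handling is minimal.
        kubeadm := "/var/lib/minikube/binaries/v1.30.3/kubeadm"
        config := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(append([]string{}, phase...), "--config", config)
            out, err := exec.Command(kubeadm, args...).CombinedOutput()
            if err != nil {
                fmt.Printf("kubeadm %v failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("all restart phases completed")
    }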
	I0805 12:58:50.594991  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:50.595109  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.096171  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.596111  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.633309  450884 api_server.go:72] duration metric: took 1.038316986s to wait for apiserver process to appear ...
	I0805 12:58:51.633350  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:51.633377  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:51.634005  450884 api_server.go:269] stopped: https://192.168.50.228:8444/healthz: Get "https://192.168.50.228:8444/healthz": dial tcp 192.168.50.228:8444: connect: connection refused
	I0805 12:58:50.021635  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:50.036338  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:50.060746  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:50.159670  450576 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:50.159724  450576 system_pods.go:61] "coredns-6f6b679f8f-nkv88" [ee7e59fb-2500-4d7a-9537-e38e08fb2445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:50.159737  450576 system_pods.go:61] "etcd-no-preload-669469" [095df0f1-069a-419f-815b-ddbec3a2291f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:50.159762  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [20b45902-b807-457a-93b3-d2b9b76d2598] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:50.159772  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [122a47ed-7f6f-4b2e-980a-45f41b997dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:50.159780  450576 system_pods.go:61] "kube-proxy-cwq69" [78e0333b-a0f4-40a6-a04d-6971bb4d09a8] Running
	I0805 12:58:50.159788  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [88010c2b-b32f-4fe1-952d-262e881b76dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:50.159796  450576 system_pods.go:61] "metrics-server-6867b74b74-p7b2r" [7e4dd805-07c8-4339-bf1a-57a98fd674cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:50.159808  450576 system_pods.go:61] "storage-provisioner" [207c46c5-c3c0-4f0b-b3ea-9b42b9e5f761] Running
	I0805 12:58:50.159817  450576 system_pods.go:74] duration metric: took 99.038765ms to wait for pod list to return data ...
	I0805 12:58:50.159830  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:50.163888  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:50.163923  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:50.163956  450576 node_conditions.go:105] duration metric: took 4.11869ms to run NodePressure ...
	I0805 12:58:50.163980  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.849885  450576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854483  450576 kubeadm.go:739] kubelet initialised
	I0805 12:58:50.854505  450576 kubeadm.go:740] duration metric: took 4.588388ms waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854514  450576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:50.861245  450576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:52.869370  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:52.134427  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.933253  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.933288  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:54.933305  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.970883  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.970928  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:55.134250  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.139762  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.139798  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:55.634499  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.644495  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.644532  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.134123  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.141958  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:56.142002  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.633573  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.640578  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 12:58:56.649624  450884 api_server.go:141] control plane version: v1.30.3
	I0805 12:58:56.649659  450884 api_server.go:131] duration metric: took 5.016299114s to wait for apiserver health ...
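The healthz probes above trace the apiserver coming back up: first a connection refused while the static pod restarts, then 403s because the unauthenticated probe is treated as system:anonymous before the RBAC bootstrap roles exist, then 500s while the rbac/bootstrap-roles and scheduling poststarthooks finish, and finally 200. A rough Go sketch of an equivalent unauthenticated poll follows; this is not minikube's implementation, and the URL and timeout are taken from (or assumed for) this run.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns 200
    // or the timeout expires, printing non-200 bodies along the way.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe does not trust the apiserver's serving cert, so skip
            // verification; unauthenticated calls show up as "system:anonymous".
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.228:8444/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }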
	I0805 12:58:56.649671  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:56.649681  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:56.651587  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:54.478201  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:54.478619  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:54.478650  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:54.478579  452186 retry.go:31] will retry after 2.992766963s: waiting for machine to come up
	I0805 12:58:56.652853  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:56.663878  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:56.699765  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:56.715040  450884 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:56.715078  450884 system_pods.go:61] "coredns-7db6d8ff4d-8rzb7" [df42e41d-4544-493f-a09d-678df1fb5258] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:56.715085  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [1ab6cd59-432a-44b8-95f2-948c585d9bbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:56.715092  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [c9173b98-c77e-4ad0-aea5-c894c045e0c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:56.715101  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [283737ec-1afa-4994-9cee-b655a8397a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:56.715105  450884 system_pods.go:61] "kube-proxy-5dr9v" [767ccb8b-2db0-4b59-b3b0-e099185bc725] Running
	I0805 12:58:56.715111  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [fb3cfdea-9370-4842-a5ab-5ac24804f59e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:56.715116  450884 system_pods.go:61] "metrics-server-569cc877fc-dsrqr" [0d4c79e4-aa6c-42f5-840b-91b9d714d078] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:56.715125  450884 system_pods.go:61] "storage-provisioner" [2dba6f50-5cdc-4195-8daf-c19dac38f488] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:58:56.715133  450884 system_pods.go:74] duration metric: took 15.343284ms to wait for pod list to return data ...
	I0805 12:58:56.715144  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:56.720006  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:56.720031  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:56.720042  450884 node_conditions.go:105] duration metric: took 4.893566ms to run NodePressure ...
	I0805 12:58:56.720059  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:56.985822  450884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990461  450884 kubeadm.go:739] kubelet initialised
	I0805 12:58:56.990484  450884 kubeadm.go:740] duration metric: took 4.636814ms waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990493  450884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:56.996266  450884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.001407  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001434  450884 pod_ready.go:81] duration metric: took 5.140963ms for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.001446  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001456  450884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.005437  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005473  450884 pod_ready.go:81] duration metric: took 3.995646ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.005486  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005495  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.009923  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009943  450884 pod_ready.go:81] duration metric: took 4.439871ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.009952  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009958  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:54.869534  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:56.370007  450576 pod_ready.go:92] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:56.370035  450576 pod_ready.go:81] duration metric: took 5.508756413s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:56.370045  450576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376357  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.376386  450576 pod_ready.go:81] duration metric: took 2.006334873s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376396  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.473094  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:57.473555  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:57.473587  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:57.473495  452186 retry.go:31] will retry after 4.27138033s: waiting for machine to come up
	I0805 12:59:01.750111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750558  451238 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:59:01.750586  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750593  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:59:01.751003  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:59:01.751061  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.751081  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:59:01.751112  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | skip adding static IP to network mk-old-k8s-version-635707 - found existing host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"}
	I0805 12:59:01.751130  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:59:01.753240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753634  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.753672  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753810  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:59:01.753854  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:59:01.753900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:01.753919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:59:01.753933  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:59:01.875919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:01.876298  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:59:01.877028  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:01.879644  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880120  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.880164  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880508  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:59:01.880778  451238 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:01.880805  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:01.881039  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.882998  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883362  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.883389  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883553  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.883755  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.883900  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.884012  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.884248  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.884496  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.884511  451238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:57.103049  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103095  450884 pod_ready.go:81] duration metric: took 93.113727ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.103109  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103116  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503531  450884 pod_ready.go:92] pod "kube-proxy-5dr9v" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:57.503556  450884 pod_ready.go:81] duration metric: took 400.433562ms for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503565  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:59.514591  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:02.011308  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
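The pod_ready waits above repeatedly read each system-critical pod and check its Ready condition, skipping the check (with a WaitExtra error) while the node itself still reports Ready=False. As a rough illustration only, not minikube's pod_ready.go, the client-go sketch below checks one pod's Ready condition; the kubeconfig path and poll interval are placeholders, and the pod name is copied from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has condition Ready=True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Kubeconfig path is a placeholder; pod and namespace come from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            ready, err := podReady(ctx, cs, "kube-system", "kube-scheduler-default-k8s-diff-port-371585")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod to be Ready")
                return
            case <-time.After(2 * time.Second):
            }
        }
    }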
	I0805 12:59:03.148902  450393 start.go:364] duration metric: took 56.514427046s to acquireMachinesLock for "embed-certs-321139"
	I0805 12:59:03.148967  450393 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:59:03.148976  450393 fix.go:54] fixHost starting: 
	I0805 12:59:03.149432  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:03.149473  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:03.166485  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0805 12:59:03.166934  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:03.167443  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:03.167469  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:03.167808  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:03.168062  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:03.168258  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:03.170011  450393 fix.go:112] recreateIfNeeded on embed-certs-321139: state=Stopped err=<nil>
	I0805 12:59:03.170036  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	W0805 12:59:03.170221  450393 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:59:03.172109  450393 out.go:177] * Restarting existing kvm2 VM for "embed-certs-321139" ...
	I0805 12:58:58.886766  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.886792  450576 pod_ready.go:81] duration metric: took 510.389529ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.886804  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891878  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.891907  450576 pod_ready.go:81] duration metric: took 5.094036ms for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891919  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896953  450576 pod_ready.go:92] pod "kube-proxy-cwq69" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.896981  450576 pod_ready.go:81] duration metric: took 5.054422ms for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896995  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902437  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.902456  450576 pod_ready.go:81] duration metric: took 5.453487ms for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902465  450576 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:00.909633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.410487  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.173728  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Start
	I0805 12:59:03.173932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring networks are active...
	I0805 12:59:03.174932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network default is active
	I0805 12:59:03.175441  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network mk-embed-certs-321139 is active
	I0805 12:59:03.176102  450393 main.go:141] libmachine: (embed-certs-321139) Getting domain xml...
	I0805 12:59:03.176848  450393 main.go:141] libmachine: (embed-certs-321139) Creating domain...
	I0805 12:59:01.984198  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:01.984237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984501  451238 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:59:01.984534  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984750  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.987690  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988085  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.988115  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988240  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.988470  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988782  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988945  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.989173  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.989407  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.989425  451238 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:59:02.108368  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:59:02.108406  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.111301  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111669  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.111712  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111837  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.112027  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112212  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112393  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.112563  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.112797  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.112824  451238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:02.225638  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:02.225681  451238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:02.225731  451238 buildroot.go:174] setting up certificates
	I0805 12:59:02.225745  451238 provision.go:84] configureAuth start
	I0805 12:59:02.225760  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:02.226099  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:02.229252  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229643  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.229671  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229885  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.232479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.232912  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.232951  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.233125  451238 provision.go:143] copyHostCerts
	I0805 12:59:02.233188  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:02.233201  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:02.233271  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:02.233412  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:02.233426  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:02.233459  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:02.233543  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:02.233553  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:02.233581  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:02.233661  451238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
	I0805 12:59:02.470213  451238 provision.go:177] copyRemoteCerts
	I0805 12:59:02.470328  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:02.470369  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.473450  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473791  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.473829  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473964  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.474173  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.474313  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.474429  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:02.558831  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:02.583652  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:59:02.609154  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:02.635827  451238 provision.go:87] duration metric: took 410.067115ms to configureAuth
	I0805 12:59:02.635862  451238 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:02.636109  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:59:02.636357  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.638964  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639466  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.639489  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639644  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.639953  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640197  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640454  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.640733  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.640975  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.641000  451238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:02.917466  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:02.917499  451238 machine.go:97] duration metric: took 1.036701572s to provisionDockerMachine
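The provisioning commands in this stretch of the log are executed over the SSH client created by the sshutil.go lines above. A minimal sketch of issuing one such remote command with golang.org/x/crypto/ssh follows; the IP, user, key path, and the `cat /etc/os-release` command are taken from the log, while the host-key policy and error handling are illustrative assumptions rather than minikube's actual ssh_runner implementation.

// Sketch only: run a single remote command the way the ssh_runner/sshutil lines above do.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.61.41:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // the info.go line below reports "Buildroot 2023.02.9" from this file
}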
	I0805 12:59:02.917512  451238 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:59:02.917522  451238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:02.917539  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:02.917946  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:02.917979  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.920900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921383  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.921426  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.921773  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.921958  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.922220  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.003670  451238 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:03.008348  451238 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:03.008384  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:03.008468  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:03.008588  451238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:03.008727  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:03.019098  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:03.042969  451238 start.go:296] duration metric: took 125.441712ms for postStartSetup
	I0805 12:59:03.043011  451238 fix.go:56] duration metric: took 20.961935899s for fixHost
	I0805 12:59:03.043034  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.045667  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046030  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.046062  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046254  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.046508  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046701  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046824  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.047002  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:03.047182  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:03.047192  451238 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:59:03.148773  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862743.120260193
	
	I0805 12:59:03.148798  451238 fix.go:216] guest clock: 1722862743.120260193
	I0805 12:59:03.148807  451238 fix.go:229] Guest: 2024-08-05 12:59:03.120260193 +0000 UTC Remote: 2024-08-05 12:59:03.043015059 +0000 UTC m=+231.118249223 (delta=77.245134ms)
	I0805 12:59:03.148831  451238 fix.go:200] guest clock delta is within tolerance: 77.245134ms
	I0805 12:59:03.148836  451238 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 21.067801046s
	I0805 12:59:03.148857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.149131  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:03.152026  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152444  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.152475  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152645  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153423  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153495  451238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:03.153551  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.153860  451238 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:03.153895  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.156566  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156903  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.156963  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157187  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157411  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.157479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.157508  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157594  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.157770  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157782  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.157924  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.158107  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.158344  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.254162  451238 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:03.260684  451238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:03.409837  451238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:03.416010  451238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:03.416093  451238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:03.433548  451238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:03.433584  451238 start.go:495] detecting cgroup driver to use...
	I0805 12:59:03.433667  451238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:03.450756  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:03.467281  451238 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:03.467341  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:03.482537  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:03.498623  451238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:03.621224  451238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:03.781777  451238 docker.go:233] disabling docker service ...
	I0805 12:59:03.781842  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:03.798020  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:03.818262  451238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:03.940897  451238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:04.075622  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:04.092487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:04.112699  451238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:59:04.112769  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.124102  451238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:04.124181  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.136339  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.147689  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.158552  451238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
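The two sed invocations just above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent, shown only to illustrate what those edits do (the file path and values are taken from the log; the regexp-based rewrite is an assumption, not minikube's code):

// Sketch: apply the same pause_image / cgroup_manager rewrites the sed commands perform.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)
	// equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
	// equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		log.Fatal(err)
	}
}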
	I0805 12:59:04.171412  451238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:04.183284  451238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:04.183336  451238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:04.199465  451238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:04.215571  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:04.342540  451238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:04.521705  451238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:04.521786  451238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:04.526734  451238 start.go:563] Will wait 60s for crictl version
	I0805 12:59:04.526795  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:04.530528  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:04.572468  451238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:04.572557  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.602411  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.636641  451238 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:59:04.638062  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:04.641240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641734  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:04.641763  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641991  451238 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:04.646446  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
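The grep/cp pipeline above removes any stale host.minikube.internal entry from /etc/hosts and appends the 192.168.61.1 mapping. The same logic as a small Go sketch (the address and hostname come from the log; everything else is illustrative):

// Sketch: make sure /etc/hosts maps host.minikube.internal to the host-only gateway.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any previous mapping, matching the grep -v in the log
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.61.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}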
	I0805 12:59:04.659876  451238 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:04.660037  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:59:04.660105  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:04.709636  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:04.709725  451238 ssh_runner.go:195] Run: which lz4
	I0805 12:59:04.714439  451238 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 12:59:04.719014  451238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:04.719047  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:59:06.414858  451238 crio.go:462] duration metric: took 1.70045694s to copy over tarball
	I0805 12:59:06.414950  451238 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:04.513198  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.018197  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:05.911274  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.911405  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:04.478626  450393 main.go:141] libmachine: (embed-certs-321139) Waiting to get IP...
	I0805 12:59:04.479615  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.480147  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.480209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.480103  452359 retry.go:31] will retry after 236.369287ms: waiting for machine to come up
	I0805 12:59:04.718716  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.719184  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.719209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.719125  452359 retry.go:31] will retry after 296.553947ms: waiting for machine to come up
	I0805 12:59:05.017667  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.018198  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.018235  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.018143  452359 retry.go:31] will retry after 427.78496ms: waiting for machine to come up
	I0805 12:59:05.447507  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.448075  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.448105  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.448038  452359 retry.go:31] will retry after 469.229133ms: waiting for machine to come up
	I0805 12:59:05.918469  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.919013  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.919047  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.918998  452359 retry.go:31] will retry after 720.005641ms: waiting for machine to come up
	I0805 12:59:06.641103  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:06.641679  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:06.641708  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:06.641634  452359 retry.go:31] will retry after 591.439327ms: waiting for machine to come up
	I0805 12:59:07.234573  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:07.235179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:07.235207  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:07.235063  452359 retry.go:31] will retry after 1.087958168s: waiting for machine to come up
	I0805 12:59:08.324599  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:08.325179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:08.325212  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:08.325129  452359 retry.go:31] will retry after 1.316276197s: waiting for machine to come up
	I0805 12:59:09.473711  451238 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058718584s)
	I0805 12:59:09.473740  451238 crio.go:469] duration metric: took 3.058854233s to extract the tarball
	I0805 12:59:09.473748  451238 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:09.524420  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:09.562003  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:09.562035  451238 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:59:09.562107  451238 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.562159  451238 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.562156  451238 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.562194  451238 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.562228  451238 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.562256  451238 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.562374  451238 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:59:09.562274  451238 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.563981  451238 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.563993  451238 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.564007  451238 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.564015  451238 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.564032  451238 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.564041  451238 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.564076  451238 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.564075  451238 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:59:09.727888  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.732060  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.732150  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.736408  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.748051  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.753579  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.762561  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:59:09.822623  451238 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:59:09.822681  451238 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.822742  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.824314  451238 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:59:09.824360  451238 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.824404  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905619  451238 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:59:09.905778  451238 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.905738  451238 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:59:09.905944  451238 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.905998  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905851  451238 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:59:09.906075  451238 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.906133  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905861  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916767  451238 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:59:09.916796  451238 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:59:09.916812  451238 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.916830  451238 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:59:09.916864  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916868  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916905  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.916958  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.918683  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.918718  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.918776  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:10.007687  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:59:10.007721  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:59:10.007871  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:10.042432  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:59:10.061343  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:59:10.061400  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:59:10.061469  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:59:10.073852  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:59:10.084957  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:59:10.423355  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:10.563992  451238 cache_images.go:92] duration metric: took 1.001937985s to LoadCachedImages
	W0805 12:59:10.564184  451238 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0805 12:59:10.564211  451238 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:59:10.564345  451238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:10.564427  451238 ssh_runner.go:195] Run: crio config
	I0805 12:59:10.612146  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:59:10.612180  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:10.612197  451238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:10.612226  451238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:59:10.612415  451238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
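The kubeadm, kubelet, and kube-proxy configuration printed above is rendered from the kubeadm options struct a few lines earlier. A toy illustration of that templating step, restricted to fields that actually appear in this log (the template text and struct are simplified assumptions, not minikube's real template):

// Sketch: render a cut-down kubeadm config from a parameter struct with text/template.
package main

import (
	"log"
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.61.41",
		BindPort:          8443,
		NodeName:          "old-k8s-version-635707",
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}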
	
	I0805 12:59:10.612507  451238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:59:10.623036  451238 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:10.623121  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:10.633484  451238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:59:10.652444  451238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:10.673192  451238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0805 12:59:10.694533  451238 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:10.699901  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:10.714251  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:10.838992  451238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:10.857248  451238 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:59:10.857279  451238 certs.go:194] generating shared ca certs ...
	I0805 12:59:10.857303  451238 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:10.857515  451238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:10.857587  451238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:10.857602  451238 certs.go:256] generating profile certs ...
	I0805 12:59:10.857746  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:59:10.857847  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:59:10.857907  451238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:59:10.858072  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:10.858122  451238 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:10.858143  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:10.858177  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:10.858207  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:10.858235  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:10.858294  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:10.859247  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:10.908518  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:10.949310  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:10.981447  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:11.008085  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:59:11.035539  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:59:11.071371  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:11.099842  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:59:11.135629  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:11.164194  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:11.190595  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:11.219765  451238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:11.240836  451238 ssh_runner.go:195] Run: openssl version
	I0805 12:59:11.247516  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:11.260736  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266004  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266100  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.273012  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:11.285453  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:11.296934  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301588  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301655  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.307459  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:11.318833  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:11.330224  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334864  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334917  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.341338  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:11.353084  451238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:11.358532  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:11.365419  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:11.371581  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:11.378308  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:11.384640  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:11.390622  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
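Each `openssl x509 ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours, so that soon-to-expire certificates can be renewed. The same check in pure Go with crypto/x509 (the certificate path is one of those in the log; the rest is an illustrative sketch):

// Sketch: Go equivalent of `openssl x509 -noout -checkend 86400` for one certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}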
	I0805 12:59:11.397027  451238 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:11.397199  451238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:11.397286  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.436612  451238 cri.go:89] found id: ""
	I0805 12:59:11.436689  451238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:11.447906  451238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:11.447927  451238 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:11.447984  451238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:11.459282  451238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:11.460548  451238 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:11.461355  451238 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-635707" cluster setting kubeconfig missing "old-k8s-version-635707" context setting]
	I0805 12:59:11.462324  451238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:11.476306  451238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:11.487869  451238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.41
	I0805 12:59:11.487911  451238 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:11.487927  451238 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:11.487988  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.526601  451238 cri.go:89] found id: ""
	I0805 12:59:11.526674  451238 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:11.545429  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:11.556725  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:11.556755  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:11.556820  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:11.566564  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:11.566648  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:11.576859  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:11.586237  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:11.586329  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:11.596721  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.607239  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:11.607340  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.617626  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:11.627179  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:11.627251  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:11.637566  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:11.648889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:11.780270  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:08.018320  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:08.018363  450884 pod_ready.go:81] duration metric: took 10.514788401s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:08.018379  450884 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:10.270876  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:10.409419  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:12.410565  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:09.643077  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:09.643655  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:09.643692  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:09.643554  452359 retry.go:31] will retry after 1.473183692s: waiting for machine to come up
	I0805 12:59:11.118468  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:11.119005  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:11.119035  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:11.118943  452359 retry.go:31] will retry after 2.036333626s: waiting for machine to come up
	I0805 12:59:13.156866  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:13.157390  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:13.157419  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:13.157339  452359 retry.go:31] will retry after 2.095065362s: waiting for machine to come up
	I0805 12:59:12.549918  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.781853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.877381  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.978141  451238 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:12.978250  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.479242  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.978456  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.478575  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.978783  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.479342  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.978307  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.479180  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:12.526543  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.027362  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:14.909480  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:16.911090  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.253589  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:15.254081  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:15.254111  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:15.254020  452359 retry.go:31] will retry after 2.859783781s: waiting for machine to come up
	I0805 12:59:18.116972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:18.117528  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:18.117559  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:18.117486  452359 retry.go:31] will retry after 4.456427854s: waiting for machine to come up
	I0805 12:59:16.978915  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.978574  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.478343  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.978820  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.478488  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.978335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.478945  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.979040  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.479324  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.525332  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.525407  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.025092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.410416  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:21.908646  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.576842  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577261  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has current primary IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577291  450393 main.go:141] libmachine: (embed-certs-321139) Found IP for machine: 192.168.39.196
	I0805 12:59:22.577306  450393 main.go:141] libmachine: (embed-certs-321139) Reserving static IP address...
	I0805 12:59:22.577834  450393 main.go:141] libmachine: (embed-certs-321139) Reserved static IP address: 192.168.39.196
	I0805 12:59:22.577877  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.577893  450393 main.go:141] libmachine: (embed-certs-321139) Waiting for SSH to be available...
	I0805 12:59:22.577915  450393 main.go:141] libmachine: (embed-certs-321139) DBG | skip adding static IP to network mk-embed-certs-321139 - found existing host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"}
	I0805 12:59:22.577922  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Getting to WaitForSSH function...
	I0805 12:59:22.580080  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580520  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.580552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580707  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH client type: external
	I0805 12:59:22.580742  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa (-rw-------)
	I0805 12:59:22.580764  450393 main.go:141] libmachine: (embed-certs-321139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:22.580778  450393 main.go:141] libmachine: (embed-certs-321139) DBG | About to run SSH command:
	I0805 12:59:22.580793  450393 main.go:141] libmachine: (embed-certs-321139) DBG | exit 0
	I0805 12:59:22.703872  450393 main.go:141] libmachine: (embed-certs-321139) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:22.704333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetConfigRaw
	I0805 12:59:22.705046  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:22.707544  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.707919  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.707951  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.708240  450393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/config.json ...
	I0805 12:59:22.708474  450393 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:22.708501  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:22.708755  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.711177  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711488  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.711510  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711639  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.711842  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.711998  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.712157  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.712378  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.712581  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.712595  450393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:59:22.816371  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:22.816433  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816708  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:59:22.816743  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816959  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.819715  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820085  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.820108  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820321  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.820510  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820656  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820794  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.820952  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.821203  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.821229  450393 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-321139 && echo "embed-certs-321139" | sudo tee /etc/hostname
	I0805 12:59:22.938845  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-321139
	
	I0805 12:59:22.938888  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.942264  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942651  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.942684  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942904  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.943161  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943383  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943568  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.943777  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.943987  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.944011  450393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-321139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-321139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-321139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:23.062700  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:23.062734  450393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:23.062762  450393 buildroot.go:174] setting up certificates
	I0805 12:59:23.062774  450393 provision.go:84] configureAuth start
	I0805 12:59:23.062800  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:23.063142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.065839  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066140  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.066175  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066359  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.069214  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069562  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.069597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069746  450393 provision.go:143] copyHostCerts
	I0805 12:59:23.069813  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:23.069827  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:23.069897  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:23.070014  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:23.070025  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:23.070083  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:23.070185  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:23.070197  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:23.070226  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:23.070308  450393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.embed-certs-321139 san=[127.0.0.1 192.168.39.196 embed-certs-321139 localhost minikube]
	I0805 12:59:23.223660  450393 provision.go:177] copyRemoteCerts
	I0805 12:59:23.223759  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:23.223799  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.226548  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.226980  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.227014  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.227195  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.227449  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.227624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.227801  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.311952  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:59:23.336888  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:23.363397  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:23.388197  450393 provision.go:87] duration metric: took 325.408192ms to configureAuth
	I0805 12:59:23.388234  450393 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:23.388470  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:23.388596  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.391247  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.391626  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391843  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.392054  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392240  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392371  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.392528  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.392825  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.392853  450393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:23.675427  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:23.675459  450393 machine.go:97] duration metric: took 966.969142ms to provisionDockerMachine
	I0805 12:59:23.675472  450393 start.go:293] postStartSetup for "embed-certs-321139" (driver="kvm2")
	I0805 12:59:23.675484  450393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:23.675515  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.675885  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:23.675912  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.678780  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679100  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.679152  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.679524  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.679657  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.679860  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.764372  450393 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:23.769059  450393 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:23.769088  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:23.769162  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:23.769231  450393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:23.769334  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:23.781287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:23.808609  450393 start.go:296] duration metric: took 133.117086ms for postStartSetup
	I0805 12:59:23.808665  450393 fix.go:56] duration metric: took 20.659690035s for fixHost
	I0805 12:59:23.808694  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.811519  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.811948  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.811978  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.812164  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.812366  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812539  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812708  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.812897  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.813137  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.813151  450393 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:59:23.916498  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862763.883942670
	
	I0805 12:59:23.916521  450393 fix.go:216] guest clock: 1722862763.883942670
	I0805 12:59:23.916536  450393 fix.go:229] Guest: 2024-08-05 12:59:23.88394267 +0000 UTC Remote: 2024-08-05 12:59:23.8086712 +0000 UTC m=+359.764794687 (delta=75.27147ms)
	I0805 12:59:23.916570  450393 fix.go:200] guest clock delta is within tolerance: 75.27147ms
	I0805 12:59:23.916578  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 20.767637373s
	I0805 12:59:23.916598  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.916867  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.919570  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.919972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.919999  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.920142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920666  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920837  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920930  450393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:23.920981  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.921063  450393 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:23.921083  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.924176  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924557  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924588  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924613  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924635  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924749  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.924936  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.925021  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925127  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925286  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925369  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.925454  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:24.000693  450393 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:24.023194  450393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:24.178807  450393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:24.184954  450393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:24.185031  450393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:24.201420  450393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:24.201453  450393 start.go:495] detecting cgroup driver to use...
	I0805 12:59:24.201543  450393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:24.218603  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:24.233928  450393 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:24.233999  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:24.248455  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:24.263355  450393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:24.386806  450393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:24.565128  450393 docker.go:233] disabling docker service ...
	I0805 12:59:24.565229  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:24.581053  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:24.594297  450393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:24.716615  450393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:24.835687  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:24.850666  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:24.870993  450393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:59:24.871055  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.881731  450393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:24.881815  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.893156  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.903802  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.915189  450393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:24.926967  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.938008  450393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.956033  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.967863  450393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:24.977758  450393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:24.977822  450393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:24.993837  450393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:25.005009  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:25.135856  450393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:25.277425  450393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:25.277513  450393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:25.282628  450393 start.go:563] Will wait 60s for crictl version
	I0805 12:59:25.282704  450393 ssh_runner.go:195] Run: which crictl
	I0805 12:59:25.287324  450393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:25.335315  450393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:25.335396  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.367574  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.398926  450393 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:59:21.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.478367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.978424  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.478877  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.978841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.478635  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.978824  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.479076  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.979222  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.478928  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.025234  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:26.028817  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:23.909428  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.910877  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:27.911235  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.400219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:25.403052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403508  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:25.403552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403849  450393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:25.408402  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:25.423146  450393 kubeadm.go:883] updating cluster {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:25.423301  450393 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:59:25.423368  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:25.460713  450393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:59:25.460795  450393 ssh_runner.go:195] Run: which lz4
	I0805 12:59:25.464997  450393 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:59:25.469397  450393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:25.469452  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:59:26.966110  450393 crio.go:462] duration metric: took 1.501152522s to copy over tarball
	I0805 12:59:26.966207  450393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:26.978648  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.478951  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.978405  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.479008  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.978521  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.479199  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.979288  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.479030  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.978372  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.479194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.525888  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:31.025690  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:30.410973  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:32.910889  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:29.287605  450393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321364872s)
	I0805 12:59:29.287636  450393 crio.go:469] duration metric: took 2.321487153s to extract the tarball
	I0805 12:59:29.287647  450393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:29.329182  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:29.372183  450393 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:59:29.372211  450393 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:59:29.372220  450393 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0805 12:59:29.372349  450393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-321139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:29.372433  450393 ssh_runner.go:195] Run: crio config
	I0805 12:59:29.426003  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:29.426025  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:29.426036  450393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:29.426059  450393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-321139 NodeName:embed-certs-321139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:59:29.426192  450393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-321139"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:29.426250  450393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:59:29.436248  450393 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:29.436315  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:29.445844  450393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0805 12:59:29.463125  450393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:29.479685  450393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0805 12:59:29.499033  450393 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:29.503175  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:29.516141  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:29.645914  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:29.664578  450393 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139 for IP: 192.168.39.196
	I0805 12:59:29.664608  450393 certs.go:194] generating shared ca certs ...
	I0805 12:59:29.664626  450393 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:29.664853  450393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:29.664922  450393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:29.664939  450393 certs.go:256] generating profile certs ...
	I0805 12:59:29.665058  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/client.key
	I0805 12:59:29.665143  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key.ce53eda3
	I0805 12:59:29.665183  450393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key
	I0805 12:59:29.665293  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:29.665324  450393 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:29.665331  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:29.665360  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:29.665382  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:29.665405  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:29.665442  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:29.666287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:29.705969  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:29.752700  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:29.779819  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:29.806578  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0805 12:59:29.832277  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:59:29.861682  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:29.888113  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:59:29.915023  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:29.942582  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:29.971225  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:29.999278  450393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:30.018294  450393 ssh_runner.go:195] Run: openssl version
	I0805 12:59:30.024645  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:30.035446  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040216  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040279  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.046151  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:30.057664  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:30.068822  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074073  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074138  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.080126  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:30.091168  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:30.103171  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108840  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108924  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.115469  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:30.126742  450393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:30.132008  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:30.138285  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:30.144251  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:30.150718  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:30.157183  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:30.163709  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
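Each of the `openssl x509 -noout -checkend 86400` runs above exits non-zero when the certificate expires within the next 24 hours, which is what would trigger regeneration. A hedged Go equivalent of that check using crypto/x509 (the path is just the first certificate from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }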
	I0805 12:59:30.170852  450393 kubeadm.go:392] StartCluster: {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:30.170987  450393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:30.171055  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.216014  450393 cri.go:89] found id: ""
	I0805 12:59:30.216103  450393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:30.234046  450393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:30.234076  450393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:30.234151  450393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:30.245861  450393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:30.247434  450393 kubeconfig.go:125] found "embed-certs-321139" server: "https://192.168.39.196:8443"
	I0805 12:59:30.250024  450393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:30.261066  450393 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.196
	I0805 12:59:30.261116  450393 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:30.261140  450393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:30.261201  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.306587  450393 cri.go:89] found id: ""
	I0805 12:59:30.306678  450393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:30.326818  450393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:30.336908  450393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:30.336931  450393 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:30.336984  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:30.346004  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:30.346105  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:30.355979  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:30.366124  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:30.366185  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:30.376923  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.386526  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:30.386599  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.396661  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:30.406693  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:30.406765  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:30.417789  450393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:30.428214  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:30.554777  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.703579  450393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.14876196s)
	I0805 12:59:31.703620  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.925724  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.999840  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
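Because existing configuration was found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init. A simplified sketch of driving those same phases with os/exec (an illustration, not minikube's ssh_runner; binary and config paths are the ones shown in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.30.3/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Same phase order as the log above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", cfg)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }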
	I0805 12:59:32.089948  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:32.090084  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.590152  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.090222  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.115351  450393 api_server.go:72] duration metric: took 1.025404322s to wait for apiserver process to appear ...
	I0805 12:59:33.115385  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:59:33.115411  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:33.115983  450393 api_server.go:269] stopped: https://192.168.39.196:8443/healthz: Get "https://192.168.39.196:8443/healthz": dial tcp 192.168.39.196:8443: connect: connection refused
	I0805 12:59:33.616210  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
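The healthz wait above keeps probing https://192.168.39.196:8443/healthz: first the connection is refused, then the API server answers 403 and 500 while bootstrap post-start hooks finish, and finally 200. A minimal poller along the same lines, standard library only (TLS verification is skipped here purely because the probe only inspects the status code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver cert is signed by minikube's own CA; for a status-code
            // probe like this we skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.196:8443/healthz")
            if err == nil {
                fmt.Println("healthz:", resp.StatusCode)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return
                }
            } else {
                fmt.Println("healthz:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }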
	I0805 12:59:31.978481  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.479031  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.978796  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.478677  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.979377  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.478595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.979227  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.478695  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.978911  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.479327  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.027363  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:35.525528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:36.274855  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.274895  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.274912  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.314290  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.314325  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.615566  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.620594  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:36.620626  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.116251  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.120719  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:37.120749  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.616330  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.620778  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 12:59:37.627608  450393 api_server.go:141] control plane version: v1.30.3
	I0805 12:59:37.627640  450393 api_server.go:131] duration metric: took 4.512246076s to wait for apiserver health ...
	I0805 12:59:37.627652  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:37.627661  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:37.628987  450393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:59:35.410070  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.411719  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.630068  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:59:37.650034  450393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:59:37.691891  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:59:37.704810  450393 system_pods.go:59] 8 kube-system pods found
	I0805 12:59:37.704855  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:59:37.704866  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:59:37.704887  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:59:37.704900  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:59:37.704916  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 12:59:37.704928  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:59:37.704946  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:59:37.704956  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:59:37.704967  450393 system_pods.go:74] duration metric: took 13.04358ms to wait for pod list to return data ...
	I0805 12:59:37.704980  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:59:37.710340  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:59:37.710367  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 12:59:37.710382  450393 node_conditions.go:105] duration metric: took 5.392102ms to run NodePressure ...
	I0805 12:59:37.710402  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:37.995945  450393 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000274  450393 kubeadm.go:739] kubelet initialised
	I0805 12:59:38.000295  450393 kubeadm.go:740] duration metric: took 4.323835ms waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000302  450393 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:38.006122  450393 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.012368  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012392  450393 pod_ready.go:81] duration metric: took 6.243837ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.012400  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012406  450393 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.016338  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016357  450393 pod_ready.go:81] duration metric: took 3.943012ms for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.016364  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016369  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.021019  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021044  450393 pod_ready.go:81] duration metric: took 4.667242ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.021055  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021063  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.096303  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096334  450393 pod_ready.go:81] duration metric: took 75.253785ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.096345  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096351  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.495648  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495677  450393 pod_ready.go:81] duration metric: took 399.318117ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.495687  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495694  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.896066  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896091  450393 pod_ready.go:81] duration metric: took 400.39101ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.896101  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896108  450393 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:39.295587  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295618  450393 pod_ready.go:81] duration metric: took 399.499354ms for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:39.295632  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295653  450393 pod_ready.go:38] duration metric: took 1.295340252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
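The pod_ready checks above bail out early ("skipping!") because the node itself is not yet Ready, so the pods' Ready condition cannot be met. A hedged client-go sketch of the underlying per-pod condition check (assumes k8s.io/client-go and a kubeconfig in the default location; this is not minikube's own helper):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name taken from the log above; any kube-system pod works the same way.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-wm7lh", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Println("PodReady:", c.Status)
            }
        }
    }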
	I0805 12:59:39.295675  450393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:59:39.308136  450393 ops.go:34] apiserver oom_adj: -16
	I0805 12:59:39.308161  450393 kubeadm.go:597] duration metric: took 9.07407738s to restartPrimaryControlPlane
	I0805 12:59:39.308170  450393 kubeadm.go:394] duration metric: took 9.137335392s to StartCluster
	I0805 12:59:39.308188  450393 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.308272  450393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:39.310750  450393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.311015  450393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:59:39.311149  450393 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:59:39.311240  450393 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-321139"
	I0805 12:59:39.311289  450393 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-321139"
	W0805 12:59:39.311303  450393 addons.go:243] addon storage-provisioner should already be in state true
	I0805 12:59:39.311301  450393 addons.go:69] Setting metrics-server=true in profile "embed-certs-321139"
	I0805 12:59:39.311305  450393 addons.go:69] Setting default-storageclass=true in profile "embed-certs-321139"
	I0805 12:59:39.311351  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311360  450393 addons.go:234] Setting addon metrics-server=true in "embed-certs-321139"
	W0805 12:59:39.311371  450393 addons.go:243] addon metrics-server should already be in state true
	I0805 12:59:39.311371  450393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-321139"
	I0805 12:59:39.311454  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311287  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:39.311848  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311897  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311906  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.311912  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311964  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.312115  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.313050  450393 out.go:177] * Verifying Kubernetes components...
	I0805 12:59:39.314390  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:39.327427  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0805 12:59:39.327687  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0805 12:59:39.328016  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328155  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328609  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328649  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.328735  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328786  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.329013  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329086  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329560  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329599  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.329676  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329721  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.330884  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0805 12:59:39.331381  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.331878  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.331902  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.332289  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.332529  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.336244  450393 addons.go:234] Setting addon default-storageclass=true in "embed-certs-321139"
	W0805 12:59:39.336269  450393 addons.go:243] addon default-storageclass should already be in state true
	I0805 12:59:39.336305  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.336688  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.336735  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.347255  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0805 12:59:39.347411  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0805 12:59:39.347776  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.347910  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.348271  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348291  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348464  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348476  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348603  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348760  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.348817  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348955  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.350697  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.350906  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.352896  450393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:39.352895  450393 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 12:59:39.354185  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 12:59:39.354207  450393 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 12:59:39.354224  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.354266  450393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.354277  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:59:39.354292  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.356641  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0805 12:59:39.357213  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.357546  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.357791  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.357814  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.357867  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.358001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.358020  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359294  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359322  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.359337  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359345  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.359353  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359488  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359669  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.359783  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.359977  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.360009  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.360077  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.360210  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.380935  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0805 12:59:39.381394  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.381987  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.382029  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.382362  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.382603  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.384225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.384497  450393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.384515  450393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:59:39.384536  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.389471  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.389972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.390001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.390124  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.390303  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.390604  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.390791  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.513696  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:39.533291  450393 node_ready.go:35] waiting up to 6m0s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:39.597816  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.700234  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.719936  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 12:59:39.719958  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 12:59:39.760405  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 12:59:39.760441  450393 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 12:59:39.808765  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.808794  450393 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 12:59:39.833073  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.946594  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.946633  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.946968  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.946995  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.947121  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.947137  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.947456  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.947477  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947490  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.953919  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.953942  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.954189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.954209  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636249  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636274  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636638  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:40.636715  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.636729  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636745  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636757  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636989  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.637008  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.671789  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.671819  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672207  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672217  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.672225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672468  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672485  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672499  450393 addons.go:475] Verifying addon metrics-server=true in "embed-certs-321139"
	I0805 12:59:40.674497  450393 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 12:59:36.978361  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.478380  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.478283  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.979257  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.478407  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.978772  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.478395  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.979309  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.478302  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.026001  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.026706  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:39.909336  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:41.910240  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.675778  450393 addons.go:510] duration metric: took 1.364642066s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 12:59:41.537321  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:44.037571  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:41.978791  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.478841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.478344  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.978613  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.478756  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.478363  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.478417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.524568  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:45.024950  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:47.025453  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:44.408846  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.410085  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.537183  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:47.037178  450393 node_ready.go:49] node "embed-certs-321139" has status "Ready":"True"
	I0805 12:59:47.037206  450393 node_ready.go:38] duration metric: took 7.503884334s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:47.037221  450393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:47.043159  450393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048037  450393 pod_ready.go:92] pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:47.048088  450393 pod_ready.go:81] duration metric: took 4.901694ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048102  450393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055429  450393 pod_ready.go:92] pod "etcd-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.055454  450393 pod_ready.go:81] duration metric: took 2.007345086s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055464  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060072  450393 pod_ready.go:92] pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.060095  450393 pod_ready.go:81] duration metric: took 4.624968ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060103  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065663  450393 pod_ready.go:92] pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.065689  450393 pod_ready.go:81] duration metric: took 5.578205ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065708  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071143  450393 pod_ready.go:92] pod "kube-proxy-shgv2" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.071166  450393 pod_ready.go:81] duration metric: took 5.450104ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071174  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:46.978356  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.478322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.978417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.478966  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.979317  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.478449  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.978364  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.479294  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.978435  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.478614  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.028075  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.524299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:48.908177  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:50.908490  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:52.909257  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:49.438002  450393 pod_ready.go:92] pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.438032  450393 pod_ready.go:81] duration metric: took 366.851004ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.438042  450393 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:51.443490  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:53.444534  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.978526  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.479187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.979090  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.478733  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.978571  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.478525  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.979125  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.478711  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.979266  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.478956  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.525369  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.526660  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:54.909757  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.409489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.445189  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.944983  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:56.979226  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.978634  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.478338  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.978987  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.479290  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.978383  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.478373  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.978412  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.479312  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.527240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.024177  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.024749  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:59.908362  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.909101  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.445471  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.944535  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.479119  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.978313  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.478401  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.979029  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.478963  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.978393  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.478418  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.978381  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.479229  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.028522  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.525385  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:04.409119  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.409863  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:05.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:07.452452  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.979172  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.479251  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.979183  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.478722  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.979248  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.478527  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.978581  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.478499  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.978520  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.478843  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.025651  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.525086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:08.909528  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.408408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:13.410472  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:09.945614  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:12.443723  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.978536  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.478504  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.979179  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:12.979258  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:13.022653  451238 cri.go:89] found id: ""
	I0805 13:00:13.022680  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.022689  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:13.022696  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:13.022766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:13.059292  451238 cri.go:89] found id: ""
	I0805 13:00:13.059326  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.059336  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:13.059343  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:13.059399  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:13.098750  451238 cri.go:89] found id: ""
	I0805 13:00:13.098782  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.098793  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:13.098802  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:13.098866  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:13.133307  451238 cri.go:89] found id: ""
	I0805 13:00:13.133338  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.133346  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:13.133353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:13.133420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:13.171124  451238 cri.go:89] found id: ""
	I0805 13:00:13.171160  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.171170  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:13.171177  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:13.171237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:13.209200  451238 cri.go:89] found id: ""
	I0805 13:00:13.209235  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.209247  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:13.209254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:13.209312  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:13.244261  451238 cri.go:89] found id: ""
	I0805 13:00:13.244302  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.244313  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:13.244324  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:13.244397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:13.283295  451238 cri.go:89] found id: ""
	I0805 13:00:13.283331  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.283342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:13.283356  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:13.283372  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:13.344134  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:13.344174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:13.384084  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:13.384119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:13.433784  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:13.433821  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:13.449756  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:13.449786  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:13.573090  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:16.074053  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:16.087817  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:16.087900  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:16.130938  451238 cri.go:89] found id: ""
	I0805 13:00:16.130970  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.130981  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:16.130989  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:16.131058  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:16.184208  451238 cri.go:89] found id: ""
	I0805 13:00:16.184245  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.184259  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:16.184269  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:16.184346  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:16.230959  451238 cri.go:89] found id: ""
	I0805 13:00:16.230998  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.231011  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:16.231020  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:16.231100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:16.282886  451238 cri.go:89] found id: ""
	I0805 13:00:16.282940  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.282954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:16.282963  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:16.283024  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:16.320345  451238 cri.go:89] found id: ""
	I0805 13:00:16.320381  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.320397  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:16.320404  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:16.320521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:16.356390  451238 cri.go:89] found id: ""
	I0805 13:00:16.356427  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.356439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:16.356447  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:16.356503  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:16.400477  451238 cri.go:89] found id: ""
	I0805 13:00:16.400510  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.400529  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:16.400539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:16.400612  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:16.440634  451238 cri.go:89] found id: ""
	I0805 13:00:16.440662  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.440673  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:16.440685  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:16.440702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:16.510879  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:16.510922  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:16.554294  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:16.554332  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:16.607798  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:16.607853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:16.622618  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:16.622655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:16.702599  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:14.025025  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.025182  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:15.909245  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.409729  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:14.445222  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.445451  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.944533  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:19.202789  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:19.215776  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:19.215851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:19.250503  451238 cri.go:89] found id: ""
	I0805 13:00:19.250540  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.250551  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:19.250558  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:19.250630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:19.287358  451238 cri.go:89] found id: ""
	I0805 13:00:19.287392  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.287403  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:19.287412  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:19.287484  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:19.322167  451238 cri.go:89] found id: ""
	I0805 13:00:19.322195  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.322203  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:19.322209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:19.322262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:19.356874  451238 cri.go:89] found id: ""
	I0805 13:00:19.356905  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.356923  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:19.356931  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:19.357006  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:19.395172  451238 cri.go:89] found id: ""
	I0805 13:00:19.395206  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.395217  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:19.395227  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:19.395294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:19.438404  451238 cri.go:89] found id: ""
	I0805 13:00:19.438431  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.438439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:19.438445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:19.438510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:19.474727  451238 cri.go:89] found id: ""
	I0805 13:00:19.474755  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.474762  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:19.474769  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:19.474832  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:19.513906  451238 cri.go:89] found id: ""
	I0805 13:00:19.513945  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.513953  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:19.513963  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:19.513977  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:19.528337  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:19.528378  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:19.601135  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.601168  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:19.601185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:19.676792  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:19.676844  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:19.716861  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:19.716894  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:18.025634  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.027525  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.909150  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.910153  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.945009  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:23.444529  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.266971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:22.280346  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:22.280422  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:22.314788  451238 cri.go:89] found id: ""
	I0805 13:00:22.314816  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.314824  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:22.314831  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:22.314884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:22.357357  451238 cri.go:89] found id: ""
	I0805 13:00:22.357394  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.357405  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:22.357414  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:22.357483  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:22.393254  451238 cri.go:89] found id: ""
	I0805 13:00:22.393288  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.393296  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:22.393302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:22.393366  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:22.434766  451238 cri.go:89] found id: ""
	I0805 13:00:22.434796  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.434807  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:22.434815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:22.434887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:22.475649  451238 cri.go:89] found id: ""
	I0805 13:00:22.475676  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.475684  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:22.475690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:22.475754  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:22.515633  451238 cri.go:89] found id: ""
	I0805 13:00:22.515662  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.515670  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:22.515677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:22.515757  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:22.550716  451238 cri.go:89] found id: ""
	I0805 13:00:22.550749  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.550759  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:22.550767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:22.550849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:22.588537  451238 cri.go:89] found id: ""
	I0805 13:00:22.588571  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.588583  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:22.588595  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:22.588609  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.638535  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:22.638577  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:22.654879  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:22.654919  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:22.721482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:22.721513  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:22.721529  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:22.801442  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:22.801489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.343805  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:25.358068  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:25.358176  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:25.393734  451238 cri.go:89] found id: ""
	I0805 13:00:25.393767  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.393778  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:25.393785  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:25.393849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:25.428217  451238 cri.go:89] found id: ""
	I0805 13:00:25.428244  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.428252  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:25.428257  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:25.428316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:25.462826  451238 cri.go:89] found id: ""
	I0805 13:00:25.462858  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.462869  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:25.462877  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:25.462961  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:25.502960  451238 cri.go:89] found id: ""
	I0805 13:00:25.502989  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.502998  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:25.503006  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:25.503072  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:25.538859  451238 cri.go:89] found id: ""
	I0805 13:00:25.538888  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.538897  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:25.538902  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:25.538964  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:25.577850  451238 cri.go:89] found id: ""
	I0805 13:00:25.577883  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.577894  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:25.577901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:25.577988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:25.611728  451238 cri.go:89] found id: ""
	I0805 13:00:25.611773  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.611785  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:25.611793  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:25.611865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:25.654987  451238 cri.go:89] found id: ""
	I0805 13:00:25.655018  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.655027  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:25.655039  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:25.655052  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:25.669124  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:25.669160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:25.747354  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:25.747380  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:25.747398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:25.825198  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:25.825241  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.865511  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:25.865546  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.526638  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.024414  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.025393  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.409361  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.411148  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.444607  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.447460  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:28.418263  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:28.431831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:28.431895  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:28.470249  451238 cri.go:89] found id: ""
	I0805 13:00:28.470280  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.470291  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:28.470301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:28.470373  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:28.506935  451238 cri.go:89] found id: ""
	I0805 13:00:28.506968  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.506977  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:28.506985  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:28.507053  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:28.546621  451238 cri.go:89] found id: ""
	I0805 13:00:28.546652  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.546663  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:28.546671  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:28.546749  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:28.584699  451238 cri.go:89] found id: ""
	I0805 13:00:28.584734  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.584745  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:28.584753  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:28.584820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:28.620693  451238 cri.go:89] found id: ""
	I0805 13:00:28.620726  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.620736  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:28.620744  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:28.620814  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:28.657340  451238 cri.go:89] found id: ""
	I0805 13:00:28.657370  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.657379  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:28.657385  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:28.657438  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:28.695126  451238 cri.go:89] found id: ""
	I0805 13:00:28.695156  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.695166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:28.695174  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:28.695239  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:28.729757  451238 cri.go:89] found id: ""
	I0805 13:00:28.729808  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.729821  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:28.729834  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:28.729852  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:28.769642  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:28.769675  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:28.818076  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:28.818114  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:28.831466  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:28.831496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:28.902788  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:28.902818  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:28.902836  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.482482  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:31.497767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:31.497867  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:31.536922  451238 cri.go:89] found id: ""
	I0805 13:00:31.536948  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.536960  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:31.536969  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:31.537040  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:31.572422  451238 cri.go:89] found id: ""
	I0805 13:00:31.572456  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.572466  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:31.572472  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:31.572531  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:31.607961  451238 cri.go:89] found id: ""
	I0805 13:00:31.607996  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.608008  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:31.608016  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:31.608082  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:31.641771  451238 cri.go:89] found id: ""
	I0805 13:00:31.641800  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.641822  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:31.641830  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:31.641904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:31.681661  451238 cri.go:89] found id: ""
	I0805 13:00:31.681695  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.681707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:31.681715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:31.681791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:31.723777  451238 cri.go:89] found id: ""
	I0805 13:00:31.723814  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.723823  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:31.723829  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:31.723922  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:31.759898  451238 cri.go:89] found id: ""
	I0805 13:00:31.759935  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.759948  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:31.759957  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:31.760022  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:31.798433  451238 cri.go:89] found id: ""
	I0805 13:00:31.798462  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.798470  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:31.798480  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:31.798497  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:31.872005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:31.872030  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:31.872045  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.952201  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:31.952240  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:29.524445  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.525646  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.909901  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:32.408826  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.944170  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.944427  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.995920  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:31.995955  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:32.047453  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:32.047493  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
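The cycle above repeats for the remainder of this run: with no apiserver reachable, the log collector probes each control-plane component via crictl, finds no containers, and falls back to dumping kubelet, dmesg, CRI-O and container-status output. A minimal shell sketch of the same probes, assuming a hypothetical profile name (PROFILE) and that the node is reachable with minikube ssh; this mirrors the commands quoted in the log rather than minikube's internal code path:

    PROFILE=old-k8s-version   # hypothetical; substitute the actual profile name
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # same crictl invocation the collector issues; empty output means "0 containers"
      minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$c"
    done
    # expected to fail with "connection to the server localhost:8443 was refused"
    # while no apiserver container is running
    minikube -p "$PROFILE" ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig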
	I0805 13:00:34.562369  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:34.576644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:34.576708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:34.613002  451238 cri.go:89] found id: ""
	I0805 13:00:34.613036  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.613047  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:34.613056  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:34.613127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:34.650723  451238 cri.go:89] found id: ""
	I0805 13:00:34.650757  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.650769  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:34.650777  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:34.650851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:34.689047  451238 cri.go:89] found id: ""
	I0805 13:00:34.689073  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.689081  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:34.689088  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:34.689148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:34.727552  451238 cri.go:89] found id: ""
	I0805 13:00:34.727592  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.727604  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:34.727612  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:34.727683  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:34.761661  451238 cri.go:89] found id: ""
	I0805 13:00:34.761696  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.761707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:34.761715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:34.761791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:34.800062  451238 cri.go:89] found id: ""
	I0805 13:00:34.800116  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.800128  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:34.800137  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:34.800198  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:34.833536  451238 cri.go:89] found id: ""
	I0805 13:00:34.833566  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.833578  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:34.833586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:34.833654  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:34.868079  451238 cri.go:89] found id: ""
	I0805 13:00:34.868117  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.868126  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:34.868135  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:34.868149  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:34.920092  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:34.920124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.934484  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:34.934510  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:35.007716  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:35.007751  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:35.007768  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:35.088183  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:35.088233  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:34.024704  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.025754  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.409917  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.409993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.444842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.943985  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
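The interleaved pod_ready lines from runners 450884, 450576 and 450393 are parallel tests polling their metrics-server pods, which never report a Ready condition. A rough kubectl equivalent of that poll, assuming the pods carry the usual k8s-app=metrics-server label (the selector is not shown in this log) and a hypothetical context name:

    CTX=minikube   # hypothetical; substitute the context of the parallel test being watched
    # print each metrics-server pod with the status of its Ready condition
    kubectl --context "$CTX" -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'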
	I0805 13:00:37.633443  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:37.647405  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:37.647470  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:37.684682  451238 cri.go:89] found id: ""
	I0805 13:00:37.684711  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.684720  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:37.684727  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:37.684779  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:37.723413  451238 cri.go:89] found id: ""
	I0805 13:00:37.723442  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.723449  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:37.723455  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:37.723506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:37.758388  451238 cri.go:89] found id: ""
	I0805 13:00:37.758418  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.758428  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:37.758437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:37.758501  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:37.797846  451238 cri.go:89] found id: ""
	I0805 13:00:37.797879  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.797890  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:37.797901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:37.797971  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:37.837053  451238 cri.go:89] found id: ""
	I0805 13:00:37.837082  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.837092  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:37.837104  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:37.837163  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:37.876185  451238 cri.go:89] found id: ""
	I0805 13:00:37.876211  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.876220  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:37.876226  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:37.876294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:37.915318  451238 cri.go:89] found id: ""
	I0805 13:00:37.915350  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.915362  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:37.915370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:37.915429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:37.953916  451238 cri.go:89] found id: ""
	I0805 13:00:37.953944  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.953954  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:37.953964  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:37.953976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.991116  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:37.991154  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:38.043796  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:38.043838  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:38.058636  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:38.058669  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:38.143022  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:38.143051  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:38.143067  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:40.721468  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:40.735679  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:40.735774  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:40.773583  451238 cri.go:89] found id: ""
	I0805 13:00:40.773609  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.773617  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:40.773626  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:40.773685  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:40.819857  451238 cri.go:89] found id: ""
	I0805 13:00:40.819886  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.819895  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:40.819901  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:40.819963  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:40.857156  451238 cri.go:89] found id: ""
	I0805 13:00:40.857184  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.857192  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:40.857198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:40.857251  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:40.892933  451238 cri.go:89] found id: ""
	I0805 13:00:40.892970  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.892981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:40.892990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:40.893046  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:40.927128  451238 cri.go:89] found id: ""
	I0805 13:00:40.927163  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.927173  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:40.927182  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:40.927237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:40.961790  451238 cri.go:89] found id: ""
	I0805 13:00:40.961817  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.961826  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:40.961832  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:40.961886  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:40.996249  451238 cri.go:89] found id: ""
	I0805 13:00:40.996282  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.996293  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:40.996300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:40.996371  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:41.032305  451238 cri.go:89] found id: ""
	I0805 13:00:41.032332  451238 logs.go:276] 0 containers: []
	W0805 13:00:41.032342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:41.032358  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:41.032375  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:41.075993  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:41.076027  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:41.126020  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:41.126057  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:41.140263  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:41.140288  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:41.216648  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:41.216670  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:41.216683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:38.524812  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.024597  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.909518  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:40.910256  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.410062  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.443930  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.945026  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.796367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:43.810086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:43.810162  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.844373  451238 cri.go:89] found id: ""
	I0805 13:00:43.844410  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.844422  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:43.844430  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:43.844502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:43.880249  451238 cri.go:89] found id: ""
	I0805 13:00:43.880285  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.880295  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:43.880303  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:43.880376  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:43.921279  451238 cri.go:89] found id: ""
	I0805 13:00:43.921313  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.921323  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:43.921329  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:43.921382  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:43.963736  451238 cri.go:89] found id: ""
	I0805 13:00:43.963782  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.963794  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:43.963803  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:43.963869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:44.009001  451238 cri.go:89] found id: ""
	I0805 13:00:44.009038  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.009050  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:44.009057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:44.009128  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:44.059484  451238 cri.go:89] found id: ""
	I0805 13:00:44.059514  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.059526  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:44.059534  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:44.059605  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:44.102043  451238 cri.go:89] found id: ""
	I0805 13:00:44.102075  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.102088  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:44.102094  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:44.102170  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:44.137518  451238 cri.go:89] found id: ""
	I0805 13:00:44.137558  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.137569  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:44.137584  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:44.137600  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:44.188139  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:44.188175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:44.202544  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:44.202588  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:44.278486  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:44.278508  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:44.278521  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:44.363419  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:44.363458  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:46.905665  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:46.922141  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:46.922206  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.025461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.523997  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.908437  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.409410  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.445919  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.944243  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.963468  451238 cri.go:89] found id: ""
	I0805 13:00:46.963494  451238 logs.go:276] 0 containers: []
	W0805 13:00:46.963502  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:46.963508  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:46.963557  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:47.003445  451238 cri.go:89] found id: ""
	I0805 13:00:47.003472  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.003480  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:47.003486  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:47.003537  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:47.043271  451238 cri.go:89] found id: ""
	I0805 13:00:47.043306  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.043318  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:47.043326  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:47.043394  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:47.079843  451238 cri.go:89] found id: ""
	I0805 13:00:47.079874  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.079884  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:47.079893  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:47.079954  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:47.116819  451238 cri.go:89] found id: ""
	I0805 13:00:47.116847  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.116856  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:47.116861  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:47.116917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:47.156302  451238 cri.go:89] found id: ""
	I0805 13:00:47.156331  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.156340  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:47.156353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:47.156410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:47.200419  451238 cri.go:89] found id: ""
	I0805 13:00:47.200449  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.200463  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:47.200469  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:47.200533  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:47.237483  451238 cri.go:89] found id: ""
	I0805 13:00:47.237515  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.237522  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:47.237532  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:47.237545  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:47.251598  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:47.251632  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:47.326457  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:47.326483  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:47.326501  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.410413  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:47.410455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:47.452696  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:47.452732  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.005335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:50.019610  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:50.019679  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:50.057401  451238 cri.go:89] found id: ""
	I0805 13:00:50.057435  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.057447  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:50.057456  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:50.057516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:50.101710  451238 cri.go:89] found id: ""
	I0805 13:00:50.101743  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.101751  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:50.101758  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:50.101822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:50.139624  451238 cri.go:89] found id: ""
	I0805 13:00:50.139658  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.139669  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:50.139677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:50.139761  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:50.176004  451238 cri.go:89] found id: ""
	I0805 13:00:50.176031  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.176039  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:50.176045  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:50.176123  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:50.219319  451238 cri.go:89] found id: ""
	I0805 13:00:50.219352  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.219362  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:50.219369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:50.219437  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:50.287443  451238 cri.go:89] found id: ""
	I0805 13:00:50.287478  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.287489  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:50.287498  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:50.287582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:50.321018  451238 cri.go:89] found id: ""
	I0805 13:00:50.321047  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.321056  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:50.321063  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:50.321124  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:50.354559  451238 cri.go:89] found id: ""
	I0805 13:00:50.354597  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.354610  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:50.354625  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:50.354642  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:50.398621  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:50.398657  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.451693  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:50.451735  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:50.466810  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:50.466851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:50.542431  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:50.542461  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:50.542482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.525977  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.025280  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.025760  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.410198  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.908466  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.946086  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.445962  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.128466  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:53.144139  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:53.144216  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:53.178383  451238 cri.go:89] found id: ""
	I0805 13:00:53.178427  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.178438  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:53.178447  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:53.178516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:53.220312  451238 cri.go:89] found id: ""
	I0805 13:00:53.220348  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.220358  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:53.220365  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:53.220432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:53.255352  451238 cri.go:89] found id: ""
	I0805 13:00:53.255380  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.255390  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:53.255398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:53.255473  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:53.293254  451238 cri.go:89] found id: ""
	I0805 13:00:53.293292  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.293311  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:53.293320  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:53.293395  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:53.329407  451238 cri.go:89] found id: ""
	I0805 13:00:53.329436  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.329448  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:53.329455  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:53.329523  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:53.362838  451238 cri.go:89] found id: ""
	I0805 13:00:53.362868  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.362876  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:53.362883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:53.362957  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:53.399283  451238 cri.go:89] found id: ""
	I0805 13:00:53.399313  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.399324  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:53.399332  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:53.399405  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:53.438527  451238 cri.go:89] found id: ""
	I0805 13:00:53.438558  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.438567  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:53.438578  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:53.438597  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:53.492709  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:53.492760  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:53.507522  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:53.507555  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:53.581690  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:53.581710  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:53.581724  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.664402  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:53.664451  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.209640  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:56.224403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:56.224487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:56.266214  451238 cri.go:89] found id: ""
	I0805 13:00:56.266243  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.266254  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:56.266263  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:56.266328  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:56.304034  451238 cri.go:89] found id: ""
	I0805 13:00:56.304070  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.304082  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:56.304091  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:56.304172  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:56.342133  451238 cri.go:89] found id: ""
	I0805 13:00:56.342159  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.342167  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:56.342173  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:56.342225  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:56.378549  451238 cri.go:89] found id: ""
	I0805 13:00:56.378588  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.378599  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:56.378606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:56.378667  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:56.415613  451238 cri.go:89] found id: ""
	I0805 13:00:56.415641  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.415651  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:56.415657  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:56.415715  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:56.451915  451238 cri.go:89] found id: ""
	I0805 13:00:56.451944  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.451953  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:56.451960  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:56.452021  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:56.492219  451238 cri.go:89] found id: ""
	I0805 13:00:56.492255  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.492267  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:56.492275  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:56.492347  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:56.534564  451238 cri.go:89] found id: ""
	I0805 13:00:56.534606  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.534618  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:56.534632  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:56.534652  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:56.548772  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:56.548813  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:56.625649  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:56.625678  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:56.625695  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:56.716735  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:56.716787  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.771881  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:56.771910  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:54.525355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.025659  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:54.908805  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:56.909601  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:55.943885  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.945233  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.325624  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:59.338796  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:59.338869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:59.375002  451238 cri.go:89] found id: ""
	I0805 13:00:59.375039  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.375050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:59.375059  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:59.375138  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:59.410778  451238 cri.go:89] found id: ""
	I0805 13:00:59.410800  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.410810  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:59.410817  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:59.410873  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:59.453728  451238 cri.go:89] found id: ""
	I0805 13:00:59.453760  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.453771  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:59.453779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:59.453845  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:59.492968  451238 cri.go:89] found id: ""
	I0805 13:00:59.493002  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.493013  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:59.493021  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:59.493091  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:59.533342  451238 cri.go:89] found id: ""
	I0805 13:00:59.533372  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.533383  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:59.533390  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:59.533445  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:59.569677  451238 cri.go:89] found id: ""
	I0805 13:00:59.569705  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.569715  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:59.569722  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:59.569789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:59.605106  451238 cri.go:89] found id: ""
	I0805 13:00:59.605139  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.605150  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:59.605158  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:59.605228  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:59.639948  451238 cri.go:89] found id: ""
	I0805 13:00:59.639980  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.639989  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:59.640000  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:59.640016  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:59.679926  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:59.679956  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.731545  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:59.731591  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:59.746286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:59.746320  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:59.828398  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:59.828420  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:59.828439  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:59.524365  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.525092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.410713  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.909619  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.945483  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.445780  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.412560  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:02.429633  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:02.429718  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:02.475916  451238 cri.go:89] found id: ""
	I0805 13:01:02.475951  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.475963  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:02.475971  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:02.476061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:02.528807  451238 cri.go:89] found id: ""
	I0805 13:01:02.528837  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.528849  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:02.528856  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:02.528924  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:02.575164  451238 cri.go:89] found id: ""
	I0805 13:01:02.575194  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.575210  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:02.575218  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:02.575286  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:02.614709  451238 cri.go:89] found id: ""
	I0805 13:01:02.614800  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.614815  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:02.614824  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:02.614902  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:02.654941  451238 cri.go:89] found id: ""
	I0805 13:01:02.654979  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.654990  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:02.654997  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:02.655069  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:02.690552  451238 cri.go:89] found id: ""
	I0805 13:01:02.690586  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.690595  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:02.690602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:02.690657  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:02.725607  451238 cri.go:89] found id: ""
	I0805 13:01:02.725644  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.725656  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:02.725665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:02.725745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:02.760180  451238 cri.go:89] found id: ""
	I0805 13:01:02.760211  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.760223  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:02.760244  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:02.760262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:02.813071  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:02.813128  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:02.828633  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:02.828665  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:02.898049  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:02.898074  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:02.898087  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.988077  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:02.988124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:05.532719  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:05.546423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:05.546489  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:05.590978  451238 cri.go:89] found id: ""
	I0805 13:01:05.591006  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.591013  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:05.591019  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:05.591071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:05.631251  451238 cri.go:89] found id: ""
	I0805 13:01:05.631287  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.631298  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:05.631306  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:05.631391  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:05.671826  451238 cri.go:89] found id: ""
	I0805 13:01:05.671863  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.671875  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:05.671883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:05.671951  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:05.708147  451238 cri.go:89] found id: ""
	I0805 13:01:05.708176  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.708186  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:05.708194  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:05.708262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:05.741962  451238 cri.go:89] found id: ""
	I0805 13:01:05.741994  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.742006  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:05.742015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:05.742087  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:05.777930  451238 cri.go:89] found id: ""
	I0805 13:01:05.777965  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.777976  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:05.777985  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:05.778061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:05.813066  451238 cri.go:89] found id: ""
	I0805 13:01:05.813099  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.813111  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:05.813119  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:05.813189  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:05.849382  451238 cri.go:89] found id: ""
	I0805 13:01:05.849410  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.849418  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:05.849428  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:05.849440  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:05.903376  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:05.903423  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:05.918540  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:05.918575  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:05.990608  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:05.990637  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:05.990658  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:06.072524  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:06.072571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:04.025528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.525325  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.409190  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.409231  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:07.445278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.617528  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:08.631637  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:08.631713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:08.669999  451238 cri.go:89] found id: ""
	I0805 13:01:08.670039  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.670050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:08.670065  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:08.670147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:08.705322  451238 cri.go:89] found id: ""
	I0805 13:01:08.705356  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.705365  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:08.705370  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:08.705442  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:08.744884  451238 cri.go:89] found id: ""
	I0805 13:01:08.744915  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.744927  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:08.744936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:08.745018  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:08.782394  451238 cri.go:89] found id: ""
	I0805 13:01:08.782428  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.782440  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:08.782448  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:08.782518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:08.816989  451238 cri.go:89] found id: ""
	I0805 13:01:08.817018  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.817027  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:08.817034  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:08.817106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:08.856389  451238 cri.go:89] found id: ""
	I0805 13:01:08.856420  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.856431  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:08.856439  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:08.856506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:08.891942  451238 cri.go:89] found id: ""
	I0805 13:01:08.891975  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.891986  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:08.891995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:08.892064  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:08.930329  451238 cri.go:89] found id: ""
	I0805 13:01:08.930364  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.930375  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:08.930389  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:08.930406  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.972574  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:08.972610  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:09.026194  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:09.026228  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:09.040973  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:09.041002  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:09.115094  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:09.115121  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:09.115143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:11.698322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:11.711841  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:11.711927  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:11.749152  451238 cri.go:89] found id: ""
	I0805 13:01:11.749187  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.749199  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:11.749207  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:11.749274  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:11.785395  451238 cri.go:89] found id: ""
	I0805 13:01:11.785430  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.785441  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:11.785449  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:11.785516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:11.822240  451238 cri.go:89] found id: ""
	I0805 13:01:11.822282  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.822293  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:11.822302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:11.822372  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:11.858755  451238 cri.go:89] found id: ""
	I0805 13:01:11.858794  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.858805  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:11.858814  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:11.858884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:11.893064  451238 cri.go:89] found id: ""
	I0805 13:01:11.893101  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.893113  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:11.893121  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:11.893195  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:11.930965  451238 cri.go:89] found id: ""
	I0805 13:01:11.931003  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.931015  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:11.931025  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:11.931089  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:09.025566  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.525069  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.910618  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.409157  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:09.944797  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:12.445029  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.967594  451238 cri.go:89] found id: ""
	I0805 13:01:11.967620  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.967630  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:11.967638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:11.967697  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:12.004978  451238 cri.go:89] found id: ""
	I0805 13:01:12.005007  451238 logs.go:276] 0 containers: []
	W0805 13:01:12.005015  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:12.005025  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:12.005037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:12.087476  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:12.087500  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:12.087515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:12.177690  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:12.177757  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:12.222858  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:12.222889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:12.273322  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:12.273362  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:14.788210  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:14.802351  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:14.802426  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:14.837705  451238 cri.go:89] found id: ""
	I0805 13:01:14.837736  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.837746  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:14.837755  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:14.837824  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:14.873389  451238 cri.go:89] found id: ""
	I0805 13:01:14.873420  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.873430  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:14.873438  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:14.873506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:14.913969  451238 cri.go:89] found id: ""
	I0805 13:01:14.913999  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.914009  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:14.914018  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:14.914081  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:14.953478  451238 cri.go:89] found id: ""
	I0805 13:01:14.953510  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.953521  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:14.953528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:14.953584  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:14.992166  451238 cri.go:89] found id: ""
	I0805 13:01:14.992197  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.992206  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:14.992212  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:14.992291  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:15.031258  451238 cri.go:89] found id: ""
	I0805 13:01:15.031285  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.031293  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:15.031300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:15.031353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:15.068944  451238 cri.go:89] found id: ""
	I0805 13:01:15.068972  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.068980  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:15.068986  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:15.069042  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:15.105413  451238 cri.go:89] found id: ""
	I0805 13:01:15.105443  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.105454  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:15.105467  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:15.105489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:15.161925  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:15.161969  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:15.177174  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:15.177206  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:15.257950  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:15.257975  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:15.257989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:15.336672  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:15.336716  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:13.526088  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:16.025513  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:13.908773  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:15.908817  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.910431  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:14.945842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.444869  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.876314  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:17.889842  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:17.889909  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:17.928050  451238 cri.go:89] found id: ""
	I0805 13:01:17.928077  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.928086  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:17.928092  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:17.928150  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:17.965713  451238 cri.go:89] found id: ""
	I0805 13:01:17.965751  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.965762  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:17.965770  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:17.965837  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:18.002938  451238 cri.go:89] found id: ""
	I0805 13:01:18.002972  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.002984  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:18.002992  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:18.003062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:18.040140  451238 cri.go:89] found id: ""
	I0805 13:01:18.040178  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.040190  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:18.040198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:18.040269  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:18.075427  451238 cri.go:89] found id: ""
	I0805 13:01:18.075463  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.075475  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:18.075490  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:18.075558  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:18.113469  451238 cri.go:89] found id: ""
	I0805 13:01:18.113507  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.113521  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:18.113528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:18.113587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:18.152626  451238 cri.go:89] found id: ""
	I0805 13:01:18.152662  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.152672  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:18.152678  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:18.152745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:18.189540  451238 cri.go:89] found id: ""
	I0805 13:01:18.189577  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.189590  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:18.189602  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:18.189618  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:18.244314  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:18.244353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:18.257912  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:18.257939  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:18.339659  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:18.339682  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:18.339699  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:18.425391  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:18.425449  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:20.975889  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:20.989798  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:20.989868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:21.030858  451238 cri.go:89] found id: ""
	I0805 13:01:21.030894  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.030906  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:21.030915  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:21.030979  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:21.067367  451238 cri.go:89] found id: ""
	I0805 13:01:21.067402  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.067411  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:21.067419  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:21.067476  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:21.104307  451238 cri.go:89] found id: ""
	I0805 13:01:21.104337  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.104352  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:21.104361  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:21.104424  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:21.141486  451238 cri.go:89] found id: ""
	I0805 13:01:21.141519  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.141531  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:21.141539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:21.141606  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:21.179247  451238 cri.go:89] found id: ""
	I0805 13:01:21.179305  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.179317  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:21.179330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:21.179406  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:21.215030  451238 cri.go:89] found id: ""
	I0805 13:01:21.215065  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.215075  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:21.215083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:21.215152  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:21.252982  451238 cri.go:89] found id: ""
	I0805 13:01:21.253008  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.253016  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:21.253022  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:21.253097  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:21.290256  451238 cri.go:89] found id: ""
	I0805 13:01:21.290292  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.290302  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:21.290325  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:21.290343  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:21.342809  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:21.342855  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:21.357959  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:21.358000  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:21.433087  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:21.433120  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:21.433143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:21.514261  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:21.514312  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:18.025965  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.524832  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.409943  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:22.909233  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:19.445074  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:21.445547  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:23.445637  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.060402  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:24.076056  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:24.076131  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:24.115976  451238 cri.go:89] found id: ""
	I0805 13:01:24.116009  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.116022  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:24.116031  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:24.116111  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:24.158411  451238 cri.go:89] found id: ""
	I0805 13:01:24.158440  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.158448  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:24.158454  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:24.158520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:24.194589  451238 cri.go:89] found id: ""
	I0805 13:01:24.194624  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.194635  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:24.194644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:24.194720  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:24.231528  451238 cri.go:89] found id: ""
	I0805 13:01:24.231562  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.231569  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:24.231576  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:24.231649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:24.268491  451238 cri.go:89] found id: ""
	I0805 13:01:24.268523  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.268532  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:24.268538  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:24.268602  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:24.306718  451238 cri.go:89] found id: ""
	I0805 13:01:24.306752  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.306763  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:24.306772  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:24.306839  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:24.343552  451238 cri.go:89] found id: ""
	I0805 13:01:24.343578  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.343586  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:24.343593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:24.343649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:24.384555  451238 cri.go:89] found id: ""
	I0805 13:01:24.384590  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.384602  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:24.384615  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:24.384633  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.430256  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:24.430298  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:24.484616  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:24.484661  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:24.500926  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:24.500958  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:24.581379  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:24.581410  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:24.581424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:22.525806  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.526411  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.024452  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.408887  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.409717  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.945113  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:28.444740  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.167538  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:27.181959  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:27.182035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:27.223243  451238 cri.go:89] found id: ""
	I0805 13:01:27.223282  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.223293  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:27.223301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:27.223374  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:27.257806  451238 cri.go:89] found id: ""
	I0805 13:01:27.257843  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.257856  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:27.257864  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:27.257940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:27.304306  451238 cri.go:89] found id: ""
	I0805 13:01:27.304342  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.304353  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:27.304370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:27.304439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:27.342595  451238 cri.go:89] found id: ""
	I0805 13:01:27.342623  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.342631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:27.342638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:27.342707  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:27.385628  451238 cri.go:89] found id: ""
	I0805 13:01:27.385661  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.385670  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:27.385677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:27.385760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:27.425059  451238 cri.go:89] found id: ""
	I0805 13:01:27.425091  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.425100  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:27.425106  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:27.425175  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:27.465739  451238 cri.go:89] found id: ""
	I0805 13:01:27.465783  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.465794  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:27.465807  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:27.465869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:27.506431  451238 cri.go:89] found id: ""
	I0805 13:01:27.506460  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.506468  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:27.506477  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:27.506494  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:27.586440  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:27.586467  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:27.586482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.667826  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:27.667869  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:27.710458  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:27.710496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:27.763057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:27.763100  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.278799  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:30.293788  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:30.293874  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:30.336209  451238 cri.go:89] found id: ""
	I0805 13:01:30.336240  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.336248  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:30.336255  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:30.336323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:30.371593  451238 cri.go:89] found id: ""
	I0805 13:01:30.371627  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.371642  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:30.371649  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:30.371714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:30.408266  451238 cri.go:89] found id: ""
	I0805 13:01:30.408298  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.408317  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:30.408325  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:30.408388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:30.448841  451238 cri.go:89] found id: ""
	I0805 13:01:30.448864  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.448872  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:30.448878  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:30.448940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:30.488367  451238 cri.go:89] found id: ""
	I0805 13:01:30.488403  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.488411  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:30.488418  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:30.488485  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:30.527131  451238 cri.go:89] found id: ""
	I0805 13:01:30.527163  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.527173  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:30.527181  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:30.527249  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:30.568089  451238 cri.go:89] found id: ""
	I0805 13:01:30.568122  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.568131  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:30.568138  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:30.568203  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:30.605952  451238 cri.go:89] found id: ""
	I0805 13:01:30.605990  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.606007  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:30.606021  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:30.606041  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:30.656449  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:30.656491  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:30.710124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:30.710164  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.724417  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:30.724455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:30.820639  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:30.820669  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:30.820687  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:29.025377  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:31.525340  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:29.909043  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.410359  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:30.445047  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.445931  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:33.403497  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:33.419581  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:33.419651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:33.462011  451238 cri.go:89] found id: ""
	I0805 13:01:33.462042  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.462051  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:33.462057  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:33.462126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:33.502476  451238 cri.go:89] found id: ""
	I0805 13:01:33.502509  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.502519  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:33.502527  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:33.502601  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:33.547392  451238 cri.go:89] found id: ""
	I0805 13:01:33.547421  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.547430  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:33.547437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:33.547490  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:33.584013  451238 cri.go:89] found id: ""
	I0805 13:01:33.584040  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.584048  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:33.584054  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:33.584125  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:33.617325  451238 cri.go:89] found id: ""
	I0805 13:01:33.617359  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.617367  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:33.617374  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:33.617429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:33.651922  451238 cri.go:89] found id: ""
	I0805 13:01:33.651959  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.651971  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:33.651980  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:33.652049  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:33.689487  451238 cri.go:89] found id: ""
	I0805 13:01:33.689515  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.689522  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:33.689529  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:33.689580  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:33.723220  451238 cri.go:89] found id: ""
	I0805 13:01:33.723251  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.723260  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:33.723270  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:33.723282  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:33.777271  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:33.777311  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:33.792497  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:33.792532  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:33.866801  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:33.866826  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:33.866842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.946739  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:33.946774  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:36.486108  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:36.501316  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:36.501397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:36.542082  451238 cri.go:89] found id: ""
	I0805 13:01:36.542118  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.542130  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:36.542139  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:36.542217  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:36.581005  451238 cri.go:89] found id: ""
	I0805 13:01:36.581047  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.581059  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:36.581068  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:36.581148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:36.623945  451238 cri.go:89] found id: ""
	I0805 13:01:36.623974  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.623982  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:36.623987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:36.624041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:36.661632  451238 cri.go:89] found id: ""
	I0805 13:01:36.661665  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.661673  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:36.661680  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:36.661738  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:36.701808  451238 cri.go:89] found id: ""
	I0805 13:01:36.701839  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.701850  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:36.701857  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:36.701941  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:36.742287  451238 cri.go:89] found id: ""
	I0805 13:01:36.742320  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.742331  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:36.742340  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:36.742410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:36.794581  451238 cri.go:89] found id: ""
	I0805 13:01:36.794610  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.794621  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:36.794629  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:36.794690  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:36.833271  451238 cri.go:89] found id: ""
	I0805 13:01:36.833301  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.833311  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:36.833325  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:36.833346  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:36.921427  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:36.921467  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:34.024353  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.025557  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.909401  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.909529  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.945077  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.945632  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.965468  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:36.965503  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:37.018475  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:37.018515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:37.033671  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:37.033697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:37.105339  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.606042  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:39.619215  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:39.619296  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:39.655614  451238 cri.go:89] found id: ""
	I0805 13:01:39.655648  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.655660  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:39.655668  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:39.655760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:39.691489  451238 cri.go:89] found id: ""
	I0805 13:01:39.691523  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.691535  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:39.691543  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:39.691610  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:39.726394  451238 cri.go:89] found id: ""
	I0805 13:01:39.726427  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.726438  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:39.726446  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:39.726518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:39.759847  451238 cri.go:89] found id: ""
	I0805 13:01:39.759897  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.759909  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:39.759918  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:39.759988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:39.795011  451238 cri.go:89] found id: ""
	I0805 13:01:39.795043  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.795051  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:39.795057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:39.795120  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:39.831302  451238 cri.go:89] found id: ""
	I0805 13:01:39.831336  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.831346  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:39.831356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:39.831432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:39.866506  451238 cri.go:89] found id: ""
	I0805 13:01:39.866540  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.866547  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:39.866554  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:39.866622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:39.898083  451238 cri.go:89] found id: ""
	I0805 13:01:39.898108  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.898115  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:39.898128  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:39.898147  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:39.912192  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:39.912221  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:39.989216  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.989246  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:39.989262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:40.069702  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:40.069746  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:40.118390  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:40.118428  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:38.525929  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:40.527120  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:38.909905  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.408953  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.409966  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:39.445474  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.944704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.944956  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:42.669421  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:42.682287  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:42.682359  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:42.722933  451238 cri.go:89] found id: ""
	I0805 13:01:42.722961  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.722969  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:42.722975  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:42.723037  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:42.757604  451238 cri.go:89] found id: ""
	I0805 13:01:42.757635  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.757646  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:42.757654  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:42.757723  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:42.795825  451238 cri.go:89] found id: ""
	I0805 13:01:42.795852  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.795863  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:42.795871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:42.795939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:42.831749  451238 cri.go:89] found id: ""
	I0805 13:01:42.831779  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.831791  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:42.831800  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:42.831862  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:42.866280  451238 cri.go:89] found id: ""
	I0805 13:01:42.866310  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.866322  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:42.866330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:42.866390  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:42.904393  451238 cri.go:89] found id: ""
	I0805 13:01:42.904427  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.904436  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:42.904445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:42.904510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:42.943175  451238 cri.go:89] found id: ""
	I0805 13:01:42.943204  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.943215  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:42.943223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:42.943292  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:42.979117  451238 cri.go:89] found id: ""
	I0805 13:01:42.979144  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.979152  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:42.979174  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:42.979191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:43.032032  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:43.032070  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:43.046285  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:43.046315  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:43.120300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:43.120327  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:43.120347  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:43.209800  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:43.209851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:45.759057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:45.771984  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:45.772056  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:45.805421  451238 cri.go:89] found id: ""
	I0805 13:01:45.805451  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.805459  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:45.805466  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:45.805521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:45.841552  451238 cri.go:89] found id: ""
	I0805 13:01:45.841579  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.841588  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:45.841597  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:45.841672  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:45.878502  451238 cri.go:89] found id: ""
	I0805 13:01:45.878529  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.878537  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:45.878546  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:45.878622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:45.921145  451238 cri.go:89] found id: ""
	I0805 13:01:45.921187  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.921198  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:45.921207  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:45.921273  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:45.958408  451238 cri.go:89] found id: ""
	I0805 13:01:45.958437  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.958445  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:45.958452  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:45.958521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:45.994632  451238 cri.go:89] found id: ""
	I0805 13:01:45.994660  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.994669  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:45.994676  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:45.994727  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:46.032930  451238 cri.go:89] found id: ""
	I0805 13:01:46.032961  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.032971  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:46.032978  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:46.033041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:46.074396  451238 cri.go:89] found id: ""
	I0805 13:01:46.074429  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.074441  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:46.074454  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:46.074475  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:46.131977  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:46.132020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:46.147924  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:46.147957  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:46.222005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:46.222038  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:46.222054  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:46.306799  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:46.306842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:43.024643  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.524936  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.410385  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:47.909281  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:46.444746  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.950198  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.856982  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:48.870945  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:48.871025  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:48.930811  451238 cri.go:89] found id: ""
	I0805 13:01:48.930837  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.930852  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:48.930858  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:48.930917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:48.986604  451238 cri.go:89] found id: ""
	I0805 13:01:48.986629  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.986637  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:48.986643  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:48.986706  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:49.039433  451238 cri.go:89] found id: ""
	I0805 13:01:49.039468  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.039479  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:49.039487  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:49.039555  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:49.079593  451238 cri.go:89] found id: ""
	I0805 13:01:49.079625  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.079637  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:49.079645  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:49.079714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:49.116243  451238 cri.go:89] found id: ""
	I0805 13:01:49.116274  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.116284  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:49.116292  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:49.116360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:49.158744  451238 cri.go:89] found id: ""
	I0805 13:01:49.158779  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.158790  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:49.158799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:49.158868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:49.193747  451238 cri.go:89] found id: ""
	I0805 13:01:49.193778  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.193786  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:49.193792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:49.193843  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:49.227663  451238 cri.go:89] found id: ""
	I0805 13:01:49.227691  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.227704  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:49.227714  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:49.227727  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:49.281380  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:49.281424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:49.296286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:49.296318  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:49.368584  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:49.368609  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:49.368625  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:49.453857  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:49.453909  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:48.024987  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.026076  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.408363  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:52.410039  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.444602  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:53.445118  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.993057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:52.006066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:52.006148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:52.043179  451238 cri.go:89] found id: ""
	I0805 13:01:52.043212  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.043223  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:52.043231  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:52.043300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:52.076469  451238 cri.go:89] found id: ""
	I0805 13:01:52.076502  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.076512  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:52.076520  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:52.076586  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:52.112443  451238 cri.go:89] found id: ""
	I0805 13:01:52.112477  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.112488  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:52.112497  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:52.112569  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:52.147589  451238 cri.go:89] found id: ""
	I0805 13:01:52.147620  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.147631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:52.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:52.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:52.184016  451238 cri.go:89] found id: ""
	I0805 13:01:52.184053  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.184063  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:52.184072  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:52.184134  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:52.219670  451238 cri.go:89] found id: ""
	I0805 13:01:52.219702  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.219714  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:52.219727  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:52.219820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:52.258697  451238 cri.go:89] found id: ""
	I0805 13:01:52.258731  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.258744  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:52.258752  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:52.258818  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:52.299599  451238 cri.go:89] found id: ""
	I0805 13:01:52.299636  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.299649  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:52.299665  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:52.299683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:52.351730  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:52.351772  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:52.365993  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:52.366022  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:52.436019  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:52.436041  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:52.436056  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:52.520082  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:52.520118  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:55.064214  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:55.077358  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:55.077454  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:55.110523  451238 cri.go:89] found id: ""
	I0805 13:01:55.110555  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.110564  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:55.110570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:55.110630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:55.147870  451238 cri.go:89] found id: ""
	I0805 13:01:55.147905  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.147916  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:55.147925  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:55.147998  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:55.180769  451238 cri.go:89] found id: ""
	I0805 13:01:55.180803  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.180814  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:55.180822  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:55.180890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:55.217290  451238 cri.go:89] found id: ""
	I0805 13:01:55.217332  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.217343  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:55.217353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:55.217420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:55.254185  451238 cri.go:89] found id: ""
	I0805 13:01:55.254221  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.254232  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:55.254239  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:55.254295  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:55.290633  451238 cri.go:89] found id: ""
	I0805 13:01:55.290662  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.290673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:55.290681  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:55.290747  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:55.325830  451238 cri.go:89] found id: ""
	I0805 13:01:55.325862  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.325873  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:55.325880  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:55.325947  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:55.359887  451238 cri.go:89] found id: ""
	I0805 13:01:55.359922  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.359931  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:55.359941  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:55.359953  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:55.418251  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:55.418299  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:55.432007  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:55.432038  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:55.507177  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:55.507205  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:55.507219  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:55.586919  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:55.586965  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:52.525480  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.525653  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.024834  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.410408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:56.909810  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:55.944741  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.946654  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:58.128822  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:58.142726  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:58.142799  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:58.178027  451238 cri.go:89] found id: ""
	I0805 13:01:58.178056  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.178067  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:58.178075  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:58.178147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:58.213309  451238 cri.go:89] found id: ""
	I0805 13:01:58.213340  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.213351  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:58.213358  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:58.213430  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:58.247296  451238 cri.go:89] found id: ""
	I0805 13:01:58.247323  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.247332  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:58.247338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:58.247393  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:58.280226  451238 cri.go:89] found id: ""
	I0805 13:01:58.280255  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.280266  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:58.280277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:58.280335  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:58.316934  451238 cri.go:89] found id: ""
	I0805 13:01:58.316969  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.316981  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:58.316989  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:58.317055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:58.360931  451238 cri.go:89] found id: ""
	I0805 13:01:58.360967  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.360979  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:58.360987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:58.361055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:58.399112  451238 cri.go:89] found id: ""
	I0805 13:01:58.399150  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.399163  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:58.399171  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:58.399244  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:58.441903  451238 cri.go:89] found id: ""
	I0805 13:01:58.441930  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.441941  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:58.441952  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:58.441967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:58.524869  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:58.524908  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.562598  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:58.562634  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:58.618274  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:58.618313  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:58.633011  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:58.633039  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:58.706287  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.206971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:01.222277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:01.222357  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:01.266949  451238 cri.go:89] found id: ""
	I0805 13:02:01.266982  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.266993  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:01.267007  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:01.267108  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:01.306765  451238 cri.go:89] found id: ""
	I0805 13:02:01.306791  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.306799  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:01.306805  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:01.306859  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:01.345108  451238 cri.go:89] found id: ""
	I0805 13:02:01.345145  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.345157  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:01.345164  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:01.345227  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:01.383201  451238 cri.go:89] found id: ""
	I0805 13:02:01.383231  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.383239  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:01.383245  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:01.383307  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:01.419292  451238 cri.go:89] found id: ""
	I0805 13:02:01.419320  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.419331  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:01.419338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:01.419410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:01.456447  451238 cri.go:89] found id: ""
	I0805 13:02:01.456482  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.456492  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:01.456500  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:01.456568  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:01.496266  451238 cri.go:89] found id: ""
	I0805 13:02:01.496298  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.496306  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:01.496312  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:01.496375  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:01.541492  451238 cri.go:89] found id: ""
	I0805 13:02:01.541529  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.541541  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:01.541555  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:01.541571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:01.593140  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:01.593185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:01.606641  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:01.606670  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:01.681989  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.682015  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:01.682030  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:01.765612  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:01.765655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:59.025355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.025443  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:59.408591  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.409368  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:00.445254  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:02.944495  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:04.311066  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:04.326530  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:04.326599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:04.360091  451238 cri.go:89] found id: ""
	I0805 13:02:04.360124  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.360136  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:04.360142  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:04.360214  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:04.398983  451238 cri.go:89] found id: ""
	I0805 13:02:04.399014  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.399026  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:04.399045  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:04.399122  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:04.433444  451238 cri.go:89] found id: ""
	I0805 13:02:04.433474  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.433483  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:04.433495  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:04.433546  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:04.470113  451238 cri.go:89] found id: ""
	I0805 13:02:04.470145  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.470156  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:04.470167  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:04.470233  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:04.505695  451238 cri.go:89] found id: ""
	I0805 13:02:04.505721  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.505731  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:04.505738  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:04.505801  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:04.544093  451238 cri.go:89] found id: ""
	I0805 13:02:04.544121  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.544129  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:04.544136  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:04.544196  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:04.579663  451238 cri.go:89] found id: ""
	I0805 13:02:04.579702  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.579715  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:04.579724  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:04.579803  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:04.616524  451238 cri.go:89] found id: ""
	I0805 13:02:04.616565  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.616577  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:04.616590  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:04.616607  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:04.693014  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:04.693035  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:04.693048  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:04.772508  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:04.772550  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.813014  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:04.813043  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:04.864653  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:04.864702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:03.525225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:06.024868  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:03.908365  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.908993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.910958  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.444593  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.444737  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.378816  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:07.392347  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:07.392439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:07.425843  451238 cri.go:89] found id: ""
	I0805 13:02:07.425876  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.425887  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:07.425895  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:07.425958  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:07.461547  451238 cri.go:89] found id: ""
	I0805 13:02:07.461575  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.461584  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:07.461591  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:07.461651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:07.496461  451238 cri.go:89] found id: ""
	I0805 13:02:07.496500  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.496510  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:07.496521  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:07.496599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:07.531520  451238 cri.go:89] found id: ""
	I0805 13:02:07.531556  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.531566  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:07.531574  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:07.531642  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:07.571821  451238 cri.go:89] found id: ""
	I0805 13:02:07.571855  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.571866  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:07.571876  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:07.571948  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:07.611111  451238 cri.go:89] found id: ""
	I0805 13:02:07.611151  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.611159  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:07.611165  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:07.611226  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:07.651428  451238 cri.go:89] found id: ""
	I0805 13:02:07.651456  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.651464  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:07.651470  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:07.651520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:07.689828  451238 cri.go:89] found id: ""
	I0805 13:02:07.689858  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.689866  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:07.689877  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:07.689893  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:07.746381  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:07.746422  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.760953  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:07.760989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:07.834859  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:07.834883  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:07.834901  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:07.915344  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:07.915376  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:10.459232  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:10.472789  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:10.472853  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:10.508434  451238 cri.go:89] found id: ""
	I0805 13:02:10.508462  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.508470  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:10.508477  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:10.508539  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:10.543487  451238 cri.go:89] found id: ""
	I0805 13:02:10.543515  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.543524  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:10.543530  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:10.543582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:10.588274  451238 cri.go:89] found id: ""
	I0805 13:02:10.588302  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.588310  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:10.588317  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:10.588379  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:10.620810  451238 cri.go:89] found id: ""
	I0805 13:02:10.620851  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.620863  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:10.620871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:10.620945  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:10.657882  451238 cri.go:89] found id: ""
	I0805 13:02:10.657913  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.657923  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:10.657929  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:10.657993  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:10.696188  451238 cri.go:89] found id: ""
	I0805 13:02:10.696220  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.696229  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:10.696235  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:10.696294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:10.729942  451238 cri.go:89] found id: ""
	I0805 13:02:10.729977  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.729988  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:10.729996  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:10.730050  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:10.761972  451238 cri.go:89] found id: ""
	I0805 13:02:10.762000  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.762008  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:10.762018  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:10.762032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:10.816859  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:10.816890  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:10.830348  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:10.830379  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:10.902720  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:10.902753  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:10.902771  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:10.981464  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:10.981505  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:08.024948  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.525441  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.408841  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:12.409506  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:09.445359  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:11.944853  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:13.528296  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:13.541813  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:13.541887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:13.575632  451238 cri.go:89] found id: ""
	I0805 13:02:13.575669  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.575681  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:13.575689  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:13.575766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:13.612646  451238 cri.go:89] found id: ""
	I0805 13:02:13.612680  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.612691  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:13.612699  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:13.612755  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:13.650310  451238 cri.go:89] found id: ""
	I0805 13:02:13.650341  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.650361  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:13.650369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:13.650439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:13.686941  451238 cri.go:89] found id: ""
	I0805 13:02:13.686970  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.686981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:13.686990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:13.687054  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:13.722250  451238 cri.go:89] found id: ""
	I0805 13:02:13.722285  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.722297  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:13.722306  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:13.722388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:13.758337  451238 cri.go:89] found id: ""
	I0805 13:02:13.758367  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.758375  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:13.758382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:13.758443  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:13.792980  451238 cri.go:89] found id: ""
	I0805 13:02:13.793016  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.793028  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:13.793036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:13.793127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:13.831511  451238 cri.go:89] found id: ""
	I0805 13:02:13.831539  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.831547  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:13.831558  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:13.831579  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.885124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:13.885169  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:13.899112  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:13.899155  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:13.977058  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:13.977099  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:13.977115  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:14.060873  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:14.060911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:16.602595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:16.617557  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:16.617638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:16.660212  451238 cri.go:89] found id: ""
	I0805 13:02:16.660244  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.660256  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:16.660264  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:16.660323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:16.695515  451238 cri.go:89] found id: ""
	I0805 13:02:16.695553  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.695564  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:16.695572  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:16.695638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:16.732844  451238 cri.go:89] found id: ""
	I0805 13:02:16.732875  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.732884  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:16.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:16.732943  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:16.772465  451238 cri.go:89] found id: ""
	I0805 13:02:16.772497  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.772504  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:16.772517  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:16.772582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:16.809826  451238 cri.go:89] found id: ""
	I0805 13:02:16.809863  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.809875  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:16.809882  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:16.809949  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:16.849480  451238 cri.go:89] found id: ""
	I0805 13:02:16.849512  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.849523  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:16.849531  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:16.849598  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:16.884098  451238 cri.go:89] found id: ""
	I0805 13:02:16.884132  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.884144  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:16.884152  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:16.884222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:16.920497  451238 cri.go:89] found id: ""
	I0805 13:02:16.920523  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.920530  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:16.920541  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:16.920556  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.025299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:15.525474  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.908633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.909254  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.445321  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.945044  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:18.945630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.975287  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:16.975317  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:16.989524  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:16.989552  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:17.057997  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:17.058022  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:17.058037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:17.133721  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:17.133763  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:19.672385  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:19.687948  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:19.688017  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:19.724105  451238 cri.go:89] found id: ""
	I0805 13:02:19.724132  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.724140  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:19.724147  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:19.724199  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:19.758263  451238 cri.go:89] found id: ""
	I0805 13:02:19.758296  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.758306  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:19.758314  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:19.758381  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:19.792924  451238 cri.go:89] found id: ""
	I0805 13:02:19.792954  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.792961  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:19.792967  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:19.793023  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:19.826340  451238 cri.go:89] found id: ""
	I0805 13:02:19.826367  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.826375  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:19.826382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:19.826434  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:19.864289  451238 cri.go:89] found id: ""
	I0805 13:02:19.864323  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.864334  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:19.864343  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:19.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:19.899630  451238 cri.go:89] found id: ""
	I0805 13:02:19.899661  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.899673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:19.899682  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:19.899786  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:19.935798  451238 cri.go:89] found id: ""
	I0805 13:02:19.935826  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.935836  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:19.935843  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:19.935896  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:19.977984  451238 cri.go:89] found id: ""
	I0805 13:02:19.978019  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.978031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:19.978044  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:19.978062  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:20.030096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:20.030131  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:20.043878  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:20.043940  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:20.119251  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:20.119279  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:20.119297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:20.202445  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:20.202488  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:18.026282  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:20.524225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:19.408760  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.410108  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.445045  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.944150  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:22.744728  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:22.758606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:22.758675  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:22.791663  451238 cri.go:89] found id: ""
	I0805 13:02:22.791696  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.791708  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:22.791717  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:22.791821  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:22.826568  451238 cri.go:89] found id: ""
	I0805 13:02:22.826594  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.826603  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:22.826609  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:22.826671  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:22.860430  451238 cri.go:89] found id: ""
	I0805 13:02:22.860459  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.860470  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:22.860479  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:22.860543  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:22.893815  451238 cri.go:89] found id: ""
	I0805 13:02:22.893846  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.893854  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:22.893860  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:22.893929  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:22.929804  451238 cri.go:89] found id: ""
	I0805 13:02:22.929830  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.929840  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:22.929849  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:22.929915  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:22.964918  451238 cri.go:89] found id: ""
	I0805 13:02:22.964950  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.964961  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:22.964969  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:22.965035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:23.000236  451238 cri.go:89] found id: ""
	I0805 13:02:23.000271  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.000282  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:23.000290  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:23.000354  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:23.052075  451238 cri.go:89] found id: ""
	I0805 13:02:23.052108  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.052117  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:23.052128  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:23.052141  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:23.104213  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:23.104248  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:23.118811  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:23.118851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:23.188552  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:23.188578  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:23.188595  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:23.272518  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:23.272562  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:25.811116  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:25.825030  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:25.825113  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:25.864282  451238 cri.go:89] found id: ""
	I0805 13:02:25.864318  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.864331  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:25.864339  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:25.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:25.901712  451238 cri.go:89] found id: ""
	I0805 13:02:25.901746  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.901754  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:25.901760  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:25.901822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:25.937036  451238 cri.go:89] found id: ""
	I0805 13:02:25.937068  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.937077  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:25.937083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:25.937146  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:25.974598  451238 cri.go:89] found id: ""
	I0805 13:02:25.974627  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.974638  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:25.974646  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:25.974713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:26.011083  451238 cri.go:89] found id: ""
	I0805 13:02:26.011116  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.011124  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:26.011130  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:26.011190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:26.050187  451238 cri.go:89] found id: ""
	I0805 13:02:26.050219  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.050231  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:26.050242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:26.050317  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:26.085038  451238 cri.go:89] found id: ""
	I0805 13:02:26.085067  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.085077  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:26.085086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:26.085151  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:26.122121  451238 cri.go:89] found id: ""
	I0805 13:02:26.122150  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.122158  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:26.122173  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:26.122191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:26.193819  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:26.193850  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:26.193865  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:26.273453  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:26.273492  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:26.312474  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:26.312509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:26.363176  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:26.363215  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:22.524303  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:24.525047  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.528347  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.909120  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.409913  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:25.944824  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.444803  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.878523  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:28.892242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:28.892330  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:28.928650  451238 cri.go:89] found id: ""
	I0805 13:02:28.928682  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.928693  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:28.928702  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:28.928772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:28.965582  451238 cri.go:89] found id: ""
	I0805 13:02:28.965615  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.965626  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:28.965634  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:28.965698  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:29.001824  451238 cri.go:89] found id: ""
	I0805 13:02:29.001855  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.001865  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:29.001874  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:29.001939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:29.037688  451238 cri.go:89] found id: ""
	I0805 13:02:29.037715  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.037722  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:29.037730  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:29.037780  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:29.078495  451238 cri.go:89] found id: ""
	I0805 13:02:29.078540  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.078552  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:29.078559  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:29.078627  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:29.113728  451238 cri.go:89] found id: ""
	I0805 13:02:29.113764  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.113776  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:29.113786  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:29.113851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:29.147590  451238 cri.go:89] found id: ""
	I0805 13:02:29.147618  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.147629  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:29.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:29.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:29.186015  451238 cri.go:89] found id: ""
	I0805 13:02:29.186043  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.186052  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:29.186062  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:29.186074  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:29.242795  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:29.242850  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:29.257012  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:29.257046  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:29.330528  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:29.330555  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:29.330569  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:29.418109  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:29.418145  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:29.025256  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.526187  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.909283  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.409736  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:30.944380  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:32.945421  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.986351  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:32.001265  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:32.001349  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:32.035152  451238 cri.go:89] found id: ""
	I0805 13:02:32.035191  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.035200  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:32.035208  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:32.035262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:32.069086  451238 cri.go:89] found id: ""
	I0805 13:02:32.069118  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.069128  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:32.069136  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:32.069204  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:32.103788  451238 cri.go:89] found id: ""
	I0805 13:02:32.103814  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.103822  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:32.103831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:32.103893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:32.139104  451238 cri.go:89] found id: ""
	I0805 13:02:32.139138  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.139149  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:32.139157  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:32.139222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:32.192759  451238 cri.go:89] found id: ""
	I0805 13:02:32.192789  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.192798  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:32.192804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:32.192865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:32.231080  451238 cri.go:89] found id: ""
	I0805 13:02:32.231115  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.231126  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:32.231135  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:32.231200  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:32.266547  451238 cri.go:89] found id: ""
	I0805 13:02:32.266578  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.266587  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:32.266594  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:32.266647  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:32.301828  451238 cri.go:89] found id: ""
	I0805 13:02:32.301856  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.301865  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:32.301875  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:32.301888  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:32.358439  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:32.358479  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:32.372349  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:32.372383  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:32.442335  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:32.442369  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:32.442388  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:32.521705  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:32.521744  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:35.060867  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:35.074370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:35.074433  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:35.111149  451238 cri.go:89] found id: ""
	I0805 13:02:35.111181  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.111191  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:35.111200  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:35.111268  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:35.153781  451238 cri.go:89] found id: ""
	I0805 13:02:35.153814  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.153825  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:35.153832  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:35.153894  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:35.193207  451238 cri.go:89] found id: ""
	I0805 13:02:35.193239  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.193256  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:35.193291  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:35.193370  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:35.243879  451238 cri.go:89] found id: ""
	I0805 13:02:35.243915  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.243928  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:35.243936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:35.243994  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:35.297922  451238 cri.go:89] found id: ""
	I0805 13:02:35.297954  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.297966  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:35.297973  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:35.298039  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:35.333201  451238 cri.go:89] found id: ""
	I0805 13:02:35.333234  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.333245  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:35.333254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:35.333316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:35.366327  451238 cri.go:89] found id: ""
	I0805 13:02:35.366361  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.366373  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:35.366381  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:35.366449  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:35.401515  451238 cri.go:89] found id: ""
	I0805 13:02:35.401546  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.401555  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:35.401565  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:35.401578  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:35.451057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:35.451090  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:35.465054  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:35.465095  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:35.547111  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:35.547142  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:35.547160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:35.627451  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:35.627490  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
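	The cycle above (one pass of the log gatherer) reduces to a short shell loop. The sketch below only restates the commands already recorded in these lines; it is not minikube's own implementation, which is the Go code in the cri.go, logs.go and ssh_runner.go files named in each entry:
	# Probe each expected control-plane component; an empty result is what the
	# log reports as: No container was found matching "<name>"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	  else
	    echo "found: $ids"
	  fi
	done
	# With no containers to inspect, fall back to host-level logs, as above:
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a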
	I0805 13:02:34.025104  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:36.524904  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:33.908489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.909183  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.909360  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.445317  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.446056  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:38.169022  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:38.181892  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:38.181968  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:38.217919  451238 cri.go:89] found id: ""
	I0805 13:02:38.217951  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.217961  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:38.217970  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:38.218041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:38.253967  451238 cri.go:89] found id: ""
	I0805 13:02:38.253999  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.254008  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:38.254020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:38.254073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:38.293757  451238 cri.go:89] found id: ""
	I0805 13:02:38.293789  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.293801  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:38.293809  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:38.293904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:38.329657  451238 cri.go:89] found id: ""
	I0805 13:02:38.329686  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.329697  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:38.329705  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:38.329772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:38.364602  451238 cri.go:89] found id: ""
	I0805 13:02:38.364635  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.364647  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:38.364656  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:38.364732  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:38.396352  451238 cri.go:89] found id: ""
	I0805 13:02:38.396382  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.396394  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:38.396403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:38.396471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:38.429172  451238 cri.go:89] found id: ""
	I0805 13:02:38.429203  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.429214  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:38.429223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:38.429293  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:38.464855  451238 cri.go:89] found id: ""
	I0805 13:02:38.464891  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.464903  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:38.464916  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:38.464931  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:38.514924  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:38.514967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:38.530076  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:38.530113  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:38.602472  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:38.602494  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:38.602509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:38.683905  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:38.683948  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:41.226878  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:41.245027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:41.245100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:41.280482  451238 cri.go:89] found id: ""
	I0805 13:02:41.280511  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.280523  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:41.280532  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:41.280597  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:41.316592  451238 cri.go:89] found id: ""
	I0805 13:02:41.316622  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.316633  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:41.316641  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:41.316708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:41.353282  451238 cri.go:89] found id: ""
	I0805 13:02:41.353313  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.353324  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:41.353333  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:41.353397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:41.393379  451238 cri.go:89] found id: ""
	I0805 13:02:41.393406  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.393417  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:41.393426  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:41.393502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:41.430980  451238 cri.go:89] found id: ""
	I0805 13:02:41.431012  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.431023  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:41.431031  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:41.431106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:41.467228  451238 cri.go:89] found id: ""
	I0805 13:02:41.467261  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.467273  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:41.467281  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:41.467348  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:41.502105  451238 cri.go:89] found id: ""
	I0805 13:02:41.502153  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.502166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:41.502175  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:41.502250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:41.539286  451238 cri.go:89] found id: ""
	I0805 13:02:41.539314  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.539325  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:41.539338  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:41.539353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:41.592135  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:41.592175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:41.608151  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:41.608184  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:41.680096  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:41.680131  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:41.680148  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:41.759589  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:41.759628  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:39.025448  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:41.526590  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:40.409447  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.909412  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:39.945459  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.444630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.300461  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:44.314310  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:44.314388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:44.348516  451238 cri.go:89] found id: ""
	I0805 13:02:44.348549  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.348562  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:44.348570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:44.348635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:44.388256  451238 cri.go:89] found id: ""
	I0805 13:02:44.388289  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.388299  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:44.388309  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:44.388383  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:44.426743  451238 cri.go:89] found id: ""
	I0805 13:02:44.426778  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.426786  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:44.426792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:44.426848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:44.463008  451238 cri.go:89] found id: ""
	I0805 13:02:44.463044  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.463054  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:44.463062  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:44.463129  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:44.497662  451238 cri.go:89] found id: ""
	I0805 13:02:44.497696  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.497707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:44.497715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:44.497789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:44.534253  451238 cri.go:89] found id: ""
	I0805 13:02:44.534281  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.534288  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:44.534294  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:44.534378  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:44.574350  451238 cri.go:89] found id: ""
	I0805 13:02:44.574380  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.574390  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:44.574398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:44.574468  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:44.609984  451238 cri.go:89] found id: ""
	I0805 13:02:44.610018  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.610031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:44.610044  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:44.610060  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:44.650363  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:44.650402  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:44.700997  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:44.701032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:44.716841  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:44.716874  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:44.785482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:44.785502  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:44.785517  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:44.023932  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.025733  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.909613  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.409724  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.445234  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.944157  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:48.946098  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.365382  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:47.378779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:47.378851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:47.413615  451238 cri.go:89] found id: ""
	I0805 13:02:47.413636  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.413645  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:47.413651  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:47.413699  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:47.448536  451238 cri.go:89] found id: ""
	I0805 13:02:47.448563  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.448572  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:47.448578  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:47.448629  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:47.490817  451238 cri.go:89] found id: ""
	I0805 13:02:47.490847  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.490856  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:47.490862  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:47.490931  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:47.533151  451238 cri.go:89] found id: ""
	I0805 13:02:47.533179  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.533187  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:47.533193  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:47.533250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:47.571991  451238 cri.go:89] found id: ""
	I0805 13:02:47.572022  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.572030  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:47.572036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:47.572096  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:47.606943  451238 cri.go:89] found id: ""
	I0805 13:02:47.606976  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.606987  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:47.606995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:47.607073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:47.644704  451238 cri.go:89] found id: ""
	I0805 13:02:47.644741  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.644753  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:47.644762  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:47.644828  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:47.687361  451238 cri.go:89] found id: ""
	I0805 13:02:47.687395  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.687408  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:47.687427  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:47.687453  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.766572  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:47.766614  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:47.812209  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:47.812242  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:47.862948  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:47.862987  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:47.878697  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:47.878729  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:47.951680  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.452861  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:50.466370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:50.466440  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:50.500001  451238 cri.go:89] found id: ""
	I0805 13:02:50.500031  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.500043  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:50.500051  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:50.500126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:50.541752  451238 cri.go:89] found id: ""
	I0805 13:02:50.541786  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.541794  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:50.541800  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:50.541864  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:50.578889  451238 cri.go:89] found id: ""
	I0805 13:02:50.578915  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.578923  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:50.578930  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:50.578984  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:50.614865  451238 cri.go:89] found id: ""
	I0805 13:02:50.614896  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.614906  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:50.614912  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:50.614980  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:50.656169  451238 cri.go:89] found id: ""
	I0805 13:02:50.656195  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.656202  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:50.656209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:50.656277  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:50.695050  451238 cri.go:89] found id: ""
	I0805 13:02:50.695082  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.695099  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:50.695108  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:50.695187  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:50.733205  451238 cri.go:89] found id: ""
	I0805 13:02:50.733233  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.733242  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:50.733249  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:50.733300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:50.770654  451238 cri.go:89] found id: ""
	I0805 13:02:50.770683  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.770693  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:50.770706  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:50.770721  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:50.826521  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:50.826567  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:50.842153  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:50.842181  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:50.916445  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.916474  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:50.916487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:50.999973  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:51.000020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:48.525240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.024459  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:49.907505  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.909037  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:50.946199  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.444128  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.539541  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:53.553804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:53.553893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:53.593075  451238 cri.go:89] found id: ""
	I0805 13:02:53.593105  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.593114  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:53.593121  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:53.593190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:53.629967  451238 cri.go:89] found id: ""
	I0805 13:02:53.630001  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.630012  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:53.630020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:53.630088  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:53.663535  451238 cri.go:89] found id: ""
	I0805 13:02:53.663564  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.663572  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:53.663577  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:53.663635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:53.697650  451238 cri.go:89] found id: ""
	I0805 13:02:53.697676  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.697684  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:53.697690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:53.697741  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:53.732845  451238 cri.go:89] found id: ""
	I0805 13:02:53.732873  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.732883  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:53.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:53.732950  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:53.774673  451238 cri.go:89] found id: ""
	I0805 13:02:53.774703  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.774712  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:53.774719  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:53.774783  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:53.815368  451238 cri.go:89] found id: ""
	I0805 13:02:53.815401  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.815413  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:53.815423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:53.815487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:53.849726  451238 cri.go:89] found id: ""
	I0805 13:02:53.849760  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.849771  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:53.849785  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:53.849801  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:53.925356  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:53.925398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.966721  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:53.966751  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:54.023096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:54.023140  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:54.037634  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:54.037666  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:54.115159  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
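	Every "describe nodes" attempt in this run fails the same way. The reproduction below is simply the command the log prints, run inside the node; the port is whatever the referenced kubeconfig points at (here localhost:8443), not an independent assumption:
	# Same invocation as the failing log line above:
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Fails with "The connection to the server localhost:8443 was refused"
	# because no kube-apiserver container exists (every crictl probe in this
	# section returned an empty ID list), so nothing is listening on 8443.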
	I0805 13:02:56.616326  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:56.629665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:56.629744  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:56.665665  451238 cri.go:89] found id: ""
	I0805 13:02:56.665701  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.665713  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:56.665722  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:56.665790  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:56.700446  451238 cri.go:89] found id: ""
	I0805 13:02:56.700473  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.700481  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:56.700488  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:56.700554  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:56.737152  451238 cri.go:89] found id: ""
	I0805 13:02:56.737190  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.737202  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:56.737210  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:56.737283  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:56.777909  451238 cri.go:89] found id: ""
	I0805 13:02:56.777942  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.777954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:56.777961  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:56.778027  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:56.813503  451238 cri.go:89] found id: ""
	I0805 13:02:56.813537  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.813547  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:56.813556  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:56.813625  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:56.848964  451238 cri.go:89] found id: ""
	I0805 13:02:56.848993  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.849002  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:56.849008  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:56.849071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:56.884310  451238 cri.go:89] found id: ""
	I0805 13:02:56.884339  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.884347  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:56.884356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:56.884417  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:56.925895  451238 cri.go:89] found id: ""
	I0805 13:02:56.925926  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.925936  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:56.925948  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:56.925962  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:53.025086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.025424  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.026117  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.909851  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.411536  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.945123  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.945278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
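	The interleaved pod_ready lines (apparently from parallel test processes, given the distinct PIDs) poll the metrics-server pod's Ready condition. An equivalent manual check, a sketch using plain kubectl with the pod and namespace names taken from the log rather than minikube's Go client code, would be:
	# Prints "True" once the Ready condition flips; these logs show it stuck at "False".
	kubectl -n kube-system get pod metrics-server-569cc877fc-k8mrt \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'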
	I0805 13:02:56.982847  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:56.982882  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:56.997703  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:56.997742  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:57.071130  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:57.071153  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:57.071174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:57.152985  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:57.153029  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.697501  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:59.711799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:59.711879  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:59.746992  451238 cri.go:89] found id: ""
	I0805 13:02:59.747024  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.747035  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:59.747043  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:59.747115  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:59.780563  451238 cri.go:89] found id: ""
	I0805 13:02:59.780592  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.780604  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:59.780611  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:59.780676  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:59.816973  451238 cri.go:89] found id: ""
	I0805 13:02:59.817007  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.817019  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:59.817027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:59.817098  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:59.851989  451238 cri.go:89] found id: ""
	I0805 13:02:59.852018  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.852028  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:59.852035  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:59.852086  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:59.887491  451238 cri.go:89] found id: ""
	I0805 13:02:59.887517  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.887525  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:59.887535  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:59.887587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:59.924965  451238 cri.go:89] found id: ""
	I0805 13:02:59.924997  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.925005  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:59.925012  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:59.925062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:59.965830  451238 cri.go:89] found id: ""
	I0805 13:02:59.965860  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.965868  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:59.965875  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:59.965932  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:00.003208  451238 cri.go:89] found id: ""
	I0805 13:03:00.003241  451238 logs.go:276] 0 containers: []
	W0805 13:03:00.003250  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:00.003260  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:00.003275  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:00.056865  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:00.056911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:00.070563  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:00.070593  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:00.137931  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:00.137957  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:00.137976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:00.221598  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:00.221649  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.525042  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.024461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:58.903499  450576 pod_ready.go:81] duration metric: took 4m0.001018928s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	E0805 13:02:58.903533  450576 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:02:58.903556  450576 pod_ready.go:38] duration metric: took 4m8.049032492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:02:58.903598  450576 kubeadm.go:597] duration metric: took 4m18.518107211s to restartPrimaryControlPlane
	W0805 13:02:58.903786  450576 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:02:58.903819  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:02:59.945464  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.443954  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.761328  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:02.775836  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:02.775904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:02.812714  451238 cri.go:89] found id: ""
	I0805 13:03:02.812752  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.812764  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:02.812773  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:02.812848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:02.850072  451238 cri.go:89] found id: ""
	I0805 13:03:02.850103  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.850130  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:02.850138  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:02.850197  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:02.886956  451238 cri.go:89] found id: ""
	I0805 13:03:02.887081  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.887103  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:02.887114  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:02.887188  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:02.924874  451238 cri.go:89] found id: ""
	I0805 13:03:02.924906  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.924918  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:02.924925  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:02.924996  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:02.965965  451238 cri.go:89] found id: ""
	I0805 13:03:02.965996  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.966007  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:02.966015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:02.966101  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:03.001081  451238 cri.go:89] found id: ""
	I0805 13:03:03.001118  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.001130  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:03.001140  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:03.001201  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:03.036194  451238 cri.go:89] found id: ""
	I0805 13:03:03.036223  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.036234  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:03.036243  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:03.036303  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:03.071905  451238 cri.go:89] found id: ""
	I0805 13:03:03.071940  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.071951  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:03.071964  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:03.071982  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:03.124400  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:03.124442  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:03.138492  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:03.138520  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:03.207300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:03.207326  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:03.207342  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:03.294941  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:03.294983  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:05.836187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:05.850504  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:05.850609  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:05.889692  451238 cri.go:89] found id: ""
	I0805 13:03:05.889718  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.889729  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:05.889737  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:05.889804  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:05.924597  451238 cri.go:89] found id: ""
	I0805 13:03:05.924630  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.924640  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:05.924647  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:05.924711  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:05.960373  451238 cri.go:89] found id: ""
	I0805 13:03:05.960404  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.960413  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:05.960419  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:05.960471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:05.996583  451238 cri.go:89] found id: ""
	I0805 13:03:05.996617  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.996628  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:05.996636  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:05.996708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:06.033539  451238 cri.go:89] found id: ""
	I0805 13:03:06.033567  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.033575  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:06.033586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:06.033655  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:06.069348  451238 cri.go:89] found id: ""
	I0805 13:03:06.069378  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.069391  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:06.069401  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:06.069466  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:06.103570  451238 cri.go:89] found id: ""
	I0805 13:03:06.103599  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.103607  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:06.103613  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:06.103665  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:06.140230  451238 cri.go:89] found id: ""
	I0805 13:03:06.140260  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.140271  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:06.140284  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:06.140300  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:06.191073  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:06.191123  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:06.204825  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:06.204857  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:06.281309  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:06.281339  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:06.281358  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:06.361709  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:06.361749  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
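
The cycle that ends here is minikube's log-gathering pass while the API server on localhost:8443 is unreachable: it asks CRI-O for each control-plane container by name, then pulls the kubelet and CRI-O journals, dmesg, and (unsuccessfully) kubectl describe nodes. The same checks can be reproduced by hand on the node with the commands already shown in the log, for example:

    # empty output means CRI-O never started (or has since removed) that container
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # recent runtime and kubelet logs, as gathered above
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
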
	I0805 13:03:04.025007  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.524506  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:04.444267  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.444910  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.445441  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.903194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:08.921602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:08.921681  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:08.960916  451238 cri.go:89] found id: ""
	I0805 13:03:08.960945  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.960975  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:08.960986  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:08.961055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:08.996316  451238 cri.go:89] found id: ""
	I0805 13:03:08.996417  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.996436  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:08.996448  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:08.996522  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:09.038536  451238 cri.go:89] found id: ""
	I0805 13:03:09.038572  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.038584  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:09.038593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:09.038664  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:09.075368  451238 cri.go:89] found id: ""
	I0805 13:03:09.075396  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.075405  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:09.075412  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:09.075474  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:09.114232  451238 cri.go:89] found id: ""
	I0805 13:03:09.114262  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.114272  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:09.114280  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:09.114353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:09.161878  451238 cri.go:89] found id: ""
	I0805 13:03:09.161964  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.161978  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:09.161988  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:09.162062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:09.206694  451238 cri.go:89] found id: ""
	I0805 13:03:09.206727  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.206739  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:09.206748  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:09.206890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:09.257029  451238 cri.go:89] found id: ""
	I0805 13:03:09.257066  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.257079  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:09.257090  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:09.257107  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:09.278638  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:09.278679  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:09.353760  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:09.353781  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:09.353793  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:09.438371  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:09.438419  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:09.487253  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:09.487297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:08.018954  450884 pod_ready.go:81] duration metric: took 4m0.00055059s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:08.018987  450884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:03:08.019010  450884 pod_ready.go:38] duration metric: took 4m11.028507743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:08.019048  450884 kubeadm.go:597] duration metric: took 4m19.097834327s to restartPrimaryControlPlane
	W0805 13:03:08.019122  450884 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:08.019157  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:10.945002  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.945953  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.042215  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:12.055721  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:12.055812  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:12.096936  451238 cri.go:89] found id: ""
	I0805 13:03:12.096965  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.096977  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:12.096985  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:12.097051  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:12.136149  451238 cri.go:89] found id: ""
	I0805 13:03:12.136181  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.136192  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:12.136199  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:12.136276  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:12.180568  451238 cri.go:89] found id: ""
	I0805 13:03:12.180606  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.180618  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:12.180626  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:12.180695  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:12.221759  451238 cri.go:89] found id: ""
	I0805 13:03:12.221794  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.221806  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:12.221815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:12.221882  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:12.259460  451238 cri.go:89] found id: ""
	I0805 13:03:12.259490  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.259498  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:12.259508  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:12.259563  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:12.301245  451238 cri.go:89] found id: ""
	I0805 13:03:12.301277  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.301289  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:12.301297  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:12.301368  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:12.343640  451238 cri.go:89] found id: ""
	I0805 13:03:12.343678  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.343690  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:12.343698  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:12.343809  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:12.382729  451238 cri.go:89] found id: ""
	I0805 13:03:12.382762  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.382774  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:12.382787  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:12.382807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:12.400862  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:12.400897  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:12.478755  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:12.478788  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:12.478807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:12.566029  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:12.566080  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:12.611834  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:12.611929  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:15.171517  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:15.185569  451238 kubeadm.go:597] duration metric: took 4m3.737627997s to restartPrimaryControlPlane
	W0805 13:03:15.185662  451238 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:15.185697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:15.669994  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:15.684794  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:15.695088  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:15.705403  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:15.705427  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:15.705488  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:15.714777  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:15.714833  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:15.724437  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:15.733263  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:15.733317  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:15.743004  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.752219  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:15.752278  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.761788  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:15.771193  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:15.771245  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
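
Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it; in this run the files are already absent, so every grep exits with status 2 and the rm calls are no-ops. One iteration of that cleanup, written out as a shell sketch (grep -q added here for brevity):

    if ! sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/admin.conf; then
      sudo rm -f /etc/kubernetes/admin.conf   # stale (or missing) kubeconfig is cleared before kubeadm init
    fi
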
	I0805 13:03:15.780964  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:15.855628  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:03:15.855751  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:16.015686  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:16.015880  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:16.016041  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:16.207054  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:16.209133  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:16.209256  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:16.209376  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:16.209493  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:16.209597  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:16.209703  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:16.211637  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:16.211726  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:16.211833  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:16.211959  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:16.212690  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:16.212863  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:16.212963  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:16.283080  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:16.609523  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:16.765635  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:16.934487  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:16.955335  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:16.956267  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:16.956328  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:17.088081  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:15.445305  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.447306  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.090118  451238 out.go:204]   - Booting up control plane ...
	I0805 13:03:17.090264  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:17.100902  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:17.101263  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:17.102210  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:17.112522  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
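
kubeadm now waits up to 4m0s for the kubelet to start the static control-plane pods from /etc/kubernetes/manifests. If that wait stalls, the usual checks on the node are the kubelet service state, its journal, and whether any static-pod containers ever appear in CRI-O; for example:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager
    sudo crictl ps -a   # kube-apiserver / etcd containers should show up here as they are created
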
	I0805 13:03:19.943658  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:21.944253  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:23.945158  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:25.252381  450576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.348530672s)
	I0805 13:03:25.252504  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:25.269305  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:25.279322  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:25.289241  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:25.289266  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:25.289304  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:25.298671  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:25.298732  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:25.309962  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:25.320180  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:25.320247  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:25.330481  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.340565  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:25.340652  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.351244  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:25.361443  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:25.361536  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:25.371655  450576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:25.419277  450576 kubeadm.go:310] W0805 13:03:25.398597    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.420220  450576 kubeadm.go:310] W0805 13:03:25.399642    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.537148  450576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:25.945501  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:27.945972  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.413703  450576 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0805 13:03:33.413775  450576 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:33.413863  450576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:33.414008  450576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:33.414152  450576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:33.414235  450576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:33.415804  450576 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:33.415874  450576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:33.415949  450576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:33.416037  450576 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:33.416101  450576 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:33.416174  450576 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:33.416237  450576 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:33.416289  450576 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:33.416357  450576 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:33.416437  450576 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:33.416518  450576 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:33.416553  450576 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:33.416603  450576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:33.416646  450576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:33.416701  450576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:33.416745  450576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:33.416816  450576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:33.416878  450576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:33.416971  450576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:33.417059  450576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:33.418572  450576 out.go:204]   - Booting up control plane ...
	I0805 13:03:33.418671  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:33.418751  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:33.418833  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:33.418965  450576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:33.419092  450576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:33.419172  450576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:33.419342  450576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:33.419488  450576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0805 13:03:33.419577  450576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.308417ms
	I0805 13:03:33.419672  450576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:33.419780  450576 kubeadm.go:310] [api-check] The API server is healthy after 5.001429681s
	I0805 13:03:33.419908  450576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:33.420049  450576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:33.420117  450576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:33.420293  450576 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-669469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:33.420385  450576 kubeadm.go:310] [bootstrap-token] Using token: i9zl3x.c4hzh1c9ccxlydzt
	I0805 13:03:33.421925  450576 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:33.422042  450576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:33.422157  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:33.422352  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:33.422488  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:33.422649  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:33.422784  450576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:33.422914  450576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:33.422991  450576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:33.423060  450576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:33.423070  450576 kubeadm.go:310] 
	I0805 13:03:33.423160  450576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:33.423173  450576 kubeadm.go:310] 
	I0805 13:03:33.423274  450576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:33.423283  450576 kubeadm.go:310] 
	I0805 13:03:33.423316  450576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:33.423409  450576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:33.423495  450576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:33.423513  450576 kubeadm.go:310] 
	I0805 13:03:33.423616  450576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:33.423628  450576 kubeadm.go:310] 
	I0805 13:03:33.423692  450576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:33.423701  450576 kubeadm.go:310] 
	I0805 13:03:33.423793  450576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:33.423931  450576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:33.424030  450576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:33.424039  450576 kubeadm.go:310] 
	I0805 13:03:33.424106  450576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:33.424176  450576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:33.424185  450576 kubeadm.go:310] 
	I0805 13:03:33.424282  450576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424430  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:33.424473  450576 kubeadm.go:310] 	--control-plane 
	I0805 13:03:33.424482  450576 kubeadm.go:310] 
	I0805 13:03:33.424588  450576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:33.424602  450576 kubeadm.go:310] 
	I0805 13:03:33.424725  450576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424870  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:33.424892  450576 cni.go:84] Creating CNI manager for ""
	I0805 13:03:33.424911  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:33.426503  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:33.427981  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:33.439484  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
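
Here minikube writes its bridge CNI configuration (496 bytes) to /etc/cni/net.d/1-k8s.conflist; the file contents are not included in the log. To see what was actually written, the file can be read back on the node (a typical bridge conflist names the "bridge" plugin with a host-local IPAM block, but the exact fields used here are not shown):

    # inspect the CNI config that was just copied to the node
    ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist
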
	I0805 13:03:33.458459  450576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:33.458547  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:33.458579  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-669469 minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=no-preload-669469 minikube.k8s.io/primary=true
	I0805 13:03:33.488847  450576 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:29.946423  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:32.444923  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
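
The recurring pod_ready lines show the metrics-server pods (metrics-server-569cc877fc-k8mrt here, and -dsrqr in the parallel profile) never reaching "Ready", the same condition that drove the 4m0s WaitExtra timeout earlier in this block. The usual follow-up when a pod stays not-Ready is to inspect its events and container state; a sketch against this cluster (the k8s-app=metrics-server label and the deployment name are the conventional metrics-server ones and are assumptions here):

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod metrics-server-569cc877fc-k8mrt
    kubectl -n kube-system logs deployment/metrics-server
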
	I0805 13:03:33.674306  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.174940  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.674936  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.174693  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.675004  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.174801  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.674878  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.174394  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.263948  450576 kubeadm.go:1113] duration metric: took 3.805464287s to wait for elevateKubeSystemPrivileges
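
The burst of `kubectl get sa default` calls above is minikube polling the new cluster until the "default" ServiceAccount exists, which is what the 3.8s elevateKubeSystemPrivileges duration metric records. A shell equivalent of that wait loop (the retry interval is inferred from the log timestamps, roughly every 500ms):

    until sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries about twice per second
    done
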
	I0805 13:03:37.263985  450576 kubeadm.go:394] duration metric: took 4m56.93214495s to StartCluster
	I0805 13:03:37.264025  450576 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.264143  450576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:03:37.265965  450576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.266283  450576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:03:37.266400  450576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:03:37.266469  450576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-669469"
	I0805 13:03:37.266510  450576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-669469"
	W0805 13:03:37.266518  450576 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:03:37.266519  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:03:37.266551  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.266505  450576 addons.go:69] Setting default-storageclass=true in profile "no-preload-669469"
	I0805 13:03:37.266547  450576 addons.go:69] Setting metrics-server=true in profile "no-preload-669469"
	I0805 13:03:37.266612  450576 addons.go:234] Setting addon metrics-server=true in "no-preload-669469"
	I0805 13:03:37.266616  450576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-669469"
	W0805 13:03:37.266627  450576 addons.go:243] addon metrics-server should already be in state true
	I0805 13:03:37.266668  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267035  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267041  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267085  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267985  450576 out.go:177] * Verifying Kubernetes components...
	I0805 13:03:37.269486  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:03:37.283242  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0805 13:03:37.283291  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0805 13:03:37.283245  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0805 13:03:37.283710  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283785  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283717  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284316  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284319  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284335  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284360  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284734  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284735  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284746  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284963  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.285343  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285375  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.285387  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285441  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.288699  450576 addons.go:234] Setting addon default-storageclass=true in "no-preload-669469"
	W0805 13:03:37.288722  450576 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:03:37.288753  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.289023  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.289049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.303814  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0805 13:03:37.304491  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.305081  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.305104  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.305552  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0805 13:03:37.305566  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0805 13:03:37.305583  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.305928  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306007  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306148  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.306190  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.306485  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306503  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306595  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306611  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306971  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.306998  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.307157  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.307162  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.309002  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.309241  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.311054  450576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:03:37.311055  450576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:03:37.312682  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:03:37.312695  450576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:03:37.312710  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.312834  450576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.312856  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:03:37.312874  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.317044  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317635  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.317660  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317753  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317955  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318141  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.318360  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.318400  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.318427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.318539  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.318633  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318967  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.319111  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.319241  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.325066  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0805 13:03:37.325633  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.326052  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.326071  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.326326  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.326473  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.328502  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.328814  450576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.328826  450576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:03:37.328839  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.331482  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.331853  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.331874  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.332013  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.332169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.332270  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.332358  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.483477  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:03:37.501924  450576 node_ready.go:35] waiting up to 6m0s for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511394  450576 node_ready.go:49] node "no-preload-669469" has status "Ready":"True"
	I0805 13:03:37.511427  450576 node_ready.go:38] duration metric: took 9.462968ms for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511443  450576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:37.526505  450576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:37.575598  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.583338  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:03:37.583362  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:03:37.594019  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.629885  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:03:37.629913  450576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:03:37.684790  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.684825  450576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:03:37.753629  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.857352  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857386  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.857777  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.857780  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.857812  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.857829  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857838  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.858101  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.858117  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.858153  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.871616  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.871639  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.871970  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.872022  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.872031  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290429  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290449  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290784  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.290856  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290871  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290880  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290829  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.291265  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.291289  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.291271  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.880274  450576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.126602375s)
	I0805 13:03:38.880331  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880344  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880868  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.880896  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.880906  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880916  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880871  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881196  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.881204  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881211  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.881230  450576 addons.go:475] Verifying addon metrics-server=true in "no-preload-669469"
	I0805 13:03:38.882896  450576 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:03:34.945631  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:37.446855  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:39.741362  450884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.722174979s)
	I0805 13:03:39.741438  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:39.760465  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:39.770587  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:39.780157  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:39.780177  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:39.780215  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 13:03:39.790172  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:39.790243  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:39.803838  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 13:03:39.816314  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:39.816367  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:39.826636  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.836513  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:39.836570  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.846356  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 13:03:39.855694  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:39.855770  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:39.865721  450884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:40.081251  450884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:38.884521  450576 addons.go:510] duration metric: took 1.618121451s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:03:39.536758  450576 pod_ready.go:102] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:41.035239  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.035266  450576 pod_ready.go:81] duration metric: took 3.508734543s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.035280  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042787  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.042811  450576 pod_ready.go:81] duration metric: took 7.522909ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042824  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048338  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:42.048363  450576 pod_ready.go:81] duration metric: took 1.005531569s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048373  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:39.945815  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:42.445704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:44.056394  450576 pod_ready.go:102] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:45.555280  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:45.555310  450576 pod_ready.go:81] duration metric: took 3.506927542s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:45.555321  450576 pod_ready.go:38] duration metric: took 8.043865797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:45.555338  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:45.555397  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:45.572225  450576 api_server.go:72] duration metric: took 8.30589728s to wait for apiserver process to appear ...
	I0805 13:03:45.572249  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:45.572272  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 13:03:45.578042  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 13:03:45.579014  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 13:03:45.579034  450576 api_server.go:131] duration metric: took 6.778214ms to wait for apiserver health ...
	I0805 13:03:45.579042  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:45.585537  450576 system_pods.go:59] 9 kube-system pods found
	I0805 13:03:45.585660  450576 system_pods.go:61] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.585673  450576 system_pods.go:61] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.585679  450576 system_pods.go:61] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.585687  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.585694  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.585700  450576 system_pods.go:61] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.585705  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.585716  450576 system_pods.go:61] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.585726  450576 system_pods.go:61] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.585736  450576 system_pods.go:74] duration metric: took 6.688464ms to wait for pod list to return data ...
	I0805 13:03:45.585749  450576 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:03:45.589498  450576 default_sa.go:45] found service account: "default"
	I0805 13:03:45.589526  450576 default_sa.go:55] duration metric: took 3.765664ms for default service account to be created ...
	I0805 13:03:45.589535  450576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:03:45.597499  450576 system_pods.go:86] 9 kube-system pods found
	I0805 13:03:45.597527  450576 system_pods.go:89] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.597533  450576 system_pods.go:89] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.597537  450576 system_pods.go:89] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.597541  450576 system_pods.go:89] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.597547  450576 system_pods.go:89] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.597550  450576 system_pods.go:89] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.597554  450576 system_pods.go:89] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.597563  450576 system_pods.go:89] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.597568  450576 system_pods.go:89] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.597577  450576 system_pods.go:126] duration metric: took 8.035546ms to wait for k8s-apps to be running ...
	I0805 13:03:45.597586  450576 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:03:45.597631  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:45.619317  450576 system_svc.go:56] duration metric: took 21.706117ms WaitForService to wait for kubelet
	I0805 13:03:45.619365  450576 kubeadm.go:582] duration metric: took 8.353035332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:03:45.619398  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:03:45.622763  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:03:45.622790  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 13:03:45.622801  450576 node_conditions.go:105] duration metric: took 3.396756ms to run NodePressure ...
	I0805 13:03:45.622814  450576 start.go:241] waiting for startup goroutines ...
	I0805 13:03:45.622821  450576 start.go:246] waiting for cluster config update ...
	I0805 13:03:45.622831  450576 start.go:255] writing updated cluster config ...
	I0805 13:03:45.623102  450576 ssh_runner.go:195] Run: rm -f paused
	I0805 13:03:45.682547  450576 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0805 13:03:45.684415  450576 out.go:177] * Done! kubectl is now configured to use "no-preload-669469" cluster and "default" namespace by default
	I0805 13:03:48.707730  450884 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 13:03:48.707817  450884 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:48.707920  450884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:48.708065  450884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:48.708218  450884 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:48.708311  450884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:48.709807  450884 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:48.709878  450884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:48.709931  450884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:48.710008  450884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:48.710084  450884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:48.710148  450884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:48.710196  450884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:48.710251  450884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:48.710316  450884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:48.710415  450884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:48.710520  450884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:48.710582  450884 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:48.710656  450884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:48.710700  450884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:48.710746  450884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:48.710790  450884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:48.710843  450884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:48.710895  450884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:48.710971  450884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:48.711055  450884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:48.713503  450884 out.go:204]   - Booting up control plane ...
	I0805 13:03:48.713601  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:48.713687  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:48.713763  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:48.713911  450884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:48.714039  450884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:48.714105  450884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:48.714222  450884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:48.714284  450884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 13:03:48.714345  450884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.128103ms
	I0805 13:03:48.714423  450884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:48.714491  450884 kubeadm.go:310] [api-check] The API server is healthy after 5.502076793s
	I0805 13:03:48.714600  450884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:48.714730  450884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:48.714794  450884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:48.714987  450884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-371585 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:48.715075  450884 kubeadm.go:310] [bootstrap-token] Using token: cpuyhq.sjq5yhx27tk7meks
	I0805 13:03:48.716575  450884 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:48.716686  450884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:48.716775  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:48.716952  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:48.717075  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:48.717196  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:48.717270  450884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:48.717391  450884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:48.717450  450884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:48.717512  450884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:48.717521  450884 kubeadm.go:310] 
	I0805 13:03:48.717613  450884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:48.717623  450884 kubeadm.go:310] 
	I0805 13:03:48.717724  450884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:48.717734  450884 kubeadm.go:310] 
	I0805 13:03:48.717768  450884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:48.717848  450884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:48.717892  450884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:48.717898  450884 kubeadm.go:310] 
	I0805 13:03:48.717968  450884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:48.717978  450884 kubeadm.go:310] 
	I0805 13:03:48.718047  450884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:48.718057  450884 kubeadm.go:310] 
	I0805 13:03:48.718133  450884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:48.718220  450884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:48.718297  450884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:48.718304  450884 kubeadm.go:310] 
	I0805 13:03:48.718422  450884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:48.718506  450884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:48.718513  450884 kubeadm.go:310] 
	I0805 13:03:48.718585  450884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718669  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:48.718688  450884 kubeadm.go:310] 	--control-plane 
	I0805 13:03:48.718694  450884 kubeadm.go:310] 
	I0805 13:03:48.718761  450884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:48.718769  450884 kubeadm.go:310] 
	I0805 13:03:48.718848  450884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718948  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:48.718957  450884 cni.go:84] Creating CNI manager for ""
	I0805 13:03:48.718965  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:48.720262  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:44.946225  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:47.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:48.721390  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:48.732324  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:48.750318  450884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:48.750397  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:48.750398  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-371585 minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=default-k8s-diff-port-371585 minikube.k8s.io/primary=true
	I0805 13:03:48.781590  450884 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:48.966544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.467473  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.967093  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.466813  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.967183  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.467350  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.967432  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.444667  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:49.444719  450393 pod_ready.go:81] duration metric: took 4m0.006667631s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:49.444731  450393 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0805 13:03:49.444738  450393 pod_ready.go:38] duration metric: took 4m2.407503205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:49.444757  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:49.444787  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:49.444849  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:49.502039  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:49.502067  450393 cri.go:89] found id: ""
	I0805 13:03:49.502079  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:49.502139  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.510426  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:49.510494  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:49.553861  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:49.553889  450393 cri.go:89] found id: ""
	I0805 13:03:49.553899  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:49.553960  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.558802  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:49.558868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:49.594787  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:49.594810  450393 cri.go:89] found id: ""
	I0805 13:03:49.594828  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:49.594891  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.599735  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:49.599822  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:49.637856  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:49.637878  450393 cri.go:89] found id: ""
	I0805 13:03:49.637886  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:49.637939  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.642228  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:49.642295  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:49.683822  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:49.683844  450393 cri.go:89] found id: ""
	I0805 13:03:49.683853  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:49.683913  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.688077  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:49.688155  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:49.724887  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:49.724913  450393 cri.go:89] found id: ""
	I0805 13:03:49.724923  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:49.724987  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.728965  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:49.729052  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:49.765826  450393 cri.go:89] found id: ""
	I0805 13:03:49.765859  450393 logs.go:276] 0 containers: []
	W0805 13:03:49.765871  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:49.765878  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:49.765944  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:49.803790  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:49.803811  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.803815  450393 cri.go:89] found id: ""
	I0805 13:03:49.803823  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:49.803887  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.808064  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.812308  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:49.812332  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.851842  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:49.851867  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:50.418758  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:50.418808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:50.564965  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:50.564999  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:50.608518  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:50.608557  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:50.658446  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:50.658482  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:50.699924  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:50.699962  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:50.741228  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:50.741264  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:50.776100  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:50.776133  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:50.827847  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:50.827880  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:50.867699  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:50.867731  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:50.920049  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:50.920085  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:50.934198  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:50.934224  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:53.477808  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:53.494062  450393 api_server.go:72] duration metric: took 4m14.183013645s to wait for apiserver process to appear ...
	I0805 13:03:53.494093  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:53.494143  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:53.494211  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:53.534293  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:53.534322  450393 cri.go:89] found id: ""
	I0805 13:03:53.534333  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:53.534400  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.539014  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:53.539088  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:53.576587  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:53.576608  450393 cri.go:89] found id: ""
	I0805 13:03:53.576616  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:53.576667  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.582068  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:53.582147  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:53.623240  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:53.623264  450393 cri.go:89] found id: ""
	I0805 13:03:53.623274  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:53.623352  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.627638  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:53.627699  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:53.668167  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:53.668198  450393 cri.go:89] found id: ""
	I0805 13:03:53.668209  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:53.668281  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.672390  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:53.672469  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:53.714046  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:53.714069  450393 cri.go:89] found id: ""
	I0805 13:03:53.714078  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:53.714130  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.718325  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:53.718392  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:53.756343  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:53.756372  450393 cri.go:89] found id: ""
	I0805 13:03:53.756382  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:53.756444  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.760627  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:53.760696  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:53.806370  450393 cri.go:89] found id: ""
	I0805 13:03:53.806406  450393 logs.go:276] 0 containers: []
	W0805 13:03:53.806424  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:53.806432  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:53.806505  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:53.843082  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:53.843116  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:53.843121  450393 cri.go:89] found id: ""
	I0805 13:03:53.843129  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:53.843188  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.847214  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.851093  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:53.851112  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:52.467589  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:52.967390  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.466580  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.967544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.467454  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.967281  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.467111  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.467255  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.296506  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:54.296556  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:54.343983  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:54.344026  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:54.389236  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:54.389271  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:54.427964  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:54.427996  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:54.465953  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:54.465988  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:54.521755  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:54.521835  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:54.565481  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:54.565513  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:54.606592  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:54.606634  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:54.650820  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:54.650858  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:54.704512  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:54.704559  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:54.722149  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:54.722184  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:54.844289  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:54.844324  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.386998  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 13:03:57.391714  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 13:03:57.392752  450393 api_server.go:141] control plane version: v1.30.3
	I0805 13:03:57.392776  450393 api_server.go:131] duration metric: took 3.898675075s to wait for apiserver health ...
	I0805 13:03:57.392783  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:57.392812  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:57.392868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:57.430171  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:57.430201  450393 cri.go:89] found id: ""
	I0805 13:03:57.430210  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:57.430270  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.434861  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:57.434920  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:57.490595  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:57.490622  450393 cri.go:89] found id: ""
	I0805 13:03:57.490632  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:57.490702  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.496054  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:57.496141  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:57.540248  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:57.540278  450393 cri.go:89] found id: ""
	I0805 13:03:57.540289  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:57.540353  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.547750  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:57.547820  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:57.595821  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.595852  450393 cri.go:89] found id: ""
	I0805 13:03:57.595864  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:57.595932  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.600153  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:57.600225  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:57.640382  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:57.640409  450393 cri.go:89] found id: ""
	I0805 13:03:57.640418  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:57.640486  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.645476  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:57.645569  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:57.700199  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:57.700224  450393 cri.go:89] found id: ""
	I0805 13:03:57.700233  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:57.700294  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.704818  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:57.704874  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:57.745647  450393 cri.go:89] found id: ""
	I0805 13:03:57.745677  450393 logs.go:276] 0 containers: []
	W0805 13:03:57.745687  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:57.745696  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:57.745760  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:57.787327  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:57.787367  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:57.787374  450393 cri.go:89] found id: ""
	I0805 13:03:57.787384  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:57.787448  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.792340  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.796906  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:57.796933  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:57.850401  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:57.850447  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:57.961760  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:57.961808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:58.009682  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:58.009720  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:58.061874  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:58.061915  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:58.105715  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:58.105745  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:58.164739  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:58.164780  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:58.203530  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:58.203579  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:58.245478  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:58.245511  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:58.647807  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:58.647857  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:58.694175  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:58.694211  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:58.709744  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:58.709773  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:58.750668  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:58.750698  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:04:01.297212  450393 system_pods.go:59] 8 kube-system pods found
	I0805 13:04:01.297248  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.297255  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.297261  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.297265  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.297269  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.297273  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.297281  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.297289  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.297300  450393 system_pods.go:74] duration metric: took 3.904508974s to wait for pod list to return data ...
	I0805 13:04:01.297312  450393 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:01.299765  450393 default_sa.go:45] found service account: "default"
	I0805 13:04:01.299792  450393 default_sa.go:55] duration metric: took 2.470684ms for default service account to be created ...
	I0805 13:04:01.299802  450393 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:01.304612  450393 system_pods.go:86] 8 kube-system pods found
	I0805 13:04:01.304644  450393 system_pods.go:89] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.304651  450393 system_pods.go:89] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.304656  450393 system_pods.go:89] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.304661  450393 system_pods.go:89] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.304665  450393 system_pods.go:89] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.304670  450393 system_pods.go:89] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.304677  450393 system_pods.go:89] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.304685  450393 system_pods.go:89] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.304694  450393 system_pods.go:126] duration metric: took 4.885808ms to wait for k8s-apps to be running ...
	I0805 13:04:01.304702  450393 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:01.304751  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:01.323278  450393 system_svc.go:56] duration metric: took 18.55935ms WaitForService to wait for kubelet
	I0805 13:04:01.323316  450393 kubeadm.go:582] duration metric: took 4m22.01227204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:01.323349  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:01.326802  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:01.326829  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:01.326843  450393 node_conditions.go:105] duration metric: took 3.486931ms to run NodePressure ...
	I0805 13:04:01.326859  450393 start.go:241] waiting for startup goroutines ...
	I0805 13:04:01.326869  450393 start.go:246] waiting for cluster config update ...
	I0805 13:04:01.326883  450393 start.go:255] writing updated cluster config ...
	I0805 13:04:01.327230  450393 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:01.380315  450393 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:01.381891  450393 out.go:177] * Done! kubectl is now configured to use "embed-certs-321139" cluster and "default" namespace by default
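	The log-gathering pass above boils down to a fixed set of shell commands run over SSH. A minimal sketch of re-running the same collection by hand on the node (assumes crictl and journalctl are present there; look up container IDs with 'crictl ps -a' rather than reusing the IDs recorded above):
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a
	sudo /usr/bin/crictl logs --tail 400 <container-id>    # per-container logs; <container-id> is a placeholder taken from 'crictl ps -a'
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig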
	I0805 13:03:57.113870  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:03:57.114408  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:03:57.114630  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:03:57.467412  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:57.967538  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.467217  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.967035  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.466816  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.966909  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.467553  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.967667  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.467382  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.967495  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:02.085428  450884 kubeadm.go:1113] duration metric: took 13.335097096s to wait for elevateKubeSystemPrivileges
	I0805 13:04:02.085464  450884 kubeadm.go:394] duration metric: took 5m13.227479413s to StartCluster
	I0805 13:04:02.085482  450884 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.085571  450884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:04:02.087178  450884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.087425  450884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:04:02.087550  450884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:04:02.087653  450884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087659  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:04:02.087681  450884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087697  450884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087718  450884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087729  450884 addons.go:243] addon metrics-server should already be in state true
	I0805 13:04:02.087783  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.087727  450884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-371585"
	I0805 13:04:02.087692  450884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087953  450884 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:04:02.087986  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088294  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088377  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088406  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088415  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088935  450884 out.go:177] * Verifying Kubernetes components...
	I0805 13:04:02.090386  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:04:02.105328  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0805 13:04:02.105335  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0805 13:04:02.105853  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.105848  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106395  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106398  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106420  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106423  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106506  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0805 13:04:02.106879  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.106957  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106982  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.107193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.107508  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.107522  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.107534  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.107561  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.107903  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.108458  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.108490  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.111681  450884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.111709  450884 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:04:02.111775  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.113601  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.113648  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.127860  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0805 13:04:02.128512  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.128619  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0805 13:04:02.129023  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.129174  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129198  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129495  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129516  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129566  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129850  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129879  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.130443  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.131691  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.132370  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.133468  450884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:04:02.134210  450884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:04:02.134899  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0805 13:04:02.135049  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:04:02.135067  450884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:04:02.135099  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135183  450884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.135201  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:04:02.135216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135404  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.136704  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.136723  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.138362  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.138801  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.138918  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139264  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.139290  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.139335  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139377  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139404  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139482  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139637  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139737  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139807  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139867  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.139909  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.159720  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0805 13:04:02.160199  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.160744  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.160770  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.161048  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.161246  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.162535  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.162788  450884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.162805  450884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:04:02.162825  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.165787  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166204  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.166236  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166411  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.166594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.166744  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.166876  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.349175  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:04:02.453663  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.462474  450884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472177  450884 node_ready.go:49] node "default-k8s-diff-port-371585" has status "Ready":"True"
	I0805 13:04:02.472201  450884 node_ready.go:38] duration metric: took 9.692872ms for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472211  450884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:02.474341  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:04:02.474363  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:04:02.485604  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:02.514889  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.543388  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:04:02.543428  450884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:04:02.618040  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.618094  450884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:04:02.716705  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.784102  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784545  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784566  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784577  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784586  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784588  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.784851  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784868  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784868  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.797584  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.797617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.797938  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.797956  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431060  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431452  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:03.431511  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431530  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431539  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431839  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431893  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.746668  450884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029912928s)
	I0805 13:04:03.746734  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.746750  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.747152  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.747180  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.747191  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.747200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.748527  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.748558  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.748571  450884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-371585"
	I0805 13:04:03.750522  450884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:04:03.751714  450884 addons.go:510] duration metric: took 1.664163176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:04:04.491832  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.491861  450884 pod_ready.go:81] duration metric: took 2.00623062s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.491870  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496173  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.496194  450884 pod_ready.go:81] duration metric: took 4.317446ms for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496202  450884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500270  450884 pod_ready.go:92] pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.500297  450884 pod_ready.go:81] duration metric: took 4.088399ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500309  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504892  450884 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.504917  450884 pod_ready.go:81] duration metric: took 4.598589ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504926  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509448  450884 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.509468  450884 pod_ready.go:81] duration metric: took 4.535174ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509478  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890517  450884 pod_ready.go:92] pod "kube-proxy-4v6sn" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.890544  450884 pod_ready.go:81] duration metric: took 381.059204ms for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890552  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289670  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:05.289701  450884 pod_ready.go:81] duration metric: took 399.141309ms for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289712  450884 pod_ready.go:38] duration metric: took 2.817491444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:05.289732  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:04:05.289805  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:04:05.305815  450884 api_server.go:72] duration metric: took 3.218344531s to wait for apiserver process to appear ...
	I0805 13:04:05.305848  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:04:05.305870  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 13:04:05.311144  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 13:04:05.312427  450884 api_server.go:141] control plane version: v1.30.3
	I0805 13:04:05.312450  450884 api_server.go:131] duration metric: took 6.595933ms to wait for apiserver health ...
	I0805 13:04:05.312460  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:04:05.493376  450884 system_pods.go:59] 9 kube-system pods found
	I0805 13:04:05.493417  450884 system_pods.go:61] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.493425  450884 system_pods.go:61] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.493432  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.493438  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.493444  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.493450  450884 system_pods.go:61] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.493456  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.493465  450884 system_pods.go:61] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.493475  450884 system_pods.go:61] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.493488  450884 system_pods.go:74] duration metric: took 181.019102ms to wait for pod list to return data ...
	I0805 13:04:05.493504  450884 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:05.688283  450884 default_sa.go:45] found service account: "default"
	I0805 13:04:05.688313  450884 default_sa.go:55] duration metric: took 194.799711ms for default service account to be created ...
	I0805 13:04:05.688323  450884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:05.892656  450884 system_pods.go:86] 9 kube-system pods found
	I0805 13:04:05.892688  450884 system_pods.go:89] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.892696  450884 system_pods.go:89] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.892702  450884 system_pods.go:89] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.892709  450884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.892715  450884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.892721  450884 system_pods.go:89] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.892727  450884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.892737  450884 system_pods.go:89] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.892743  450884 system_pods.go:89] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.892755  450884 system_pods.go:126] duration metric: took 204.423562ms to wait for k8s-apps to be running ...
	I0805 13:04:05.892765  450884 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:05.892819  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:05.907542  450884 system_svc.go:56] duration metric: took 14.764349ms WaitForService to wait for kubelet
	I0805 13:04:05.907576  450884 kubeadm.go:582] duration metric: took 3.820116927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:05.907599  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:06.089000  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:06.089025  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:06.089035  450884 node_conditions.go:105] duration metric: took 181.431221ms to run NodePressure ...
	I0805 13:04:06.089047  450884 start.go:241] waiting for startup goroutines ...
	I0805 13:04:06.089054  450884 start.go:246] waiting for cluster config update ...
	I0805 13:04:06.089065  450884 start.go:255] writing updated cluster config ...
	I0805 13:04:06.089373  450884 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:06.140202  450884 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:06.142149  450884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-371585" cluster and "default" namespace by default
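	The readiness checks above (apiserver process, healthz endpoint, kubelet service) can be replayed by hand on the node; a minimal sketch mirroring the commands visible in the log (the -k flag is an assumption, to skip verification of the cluster's self-signed certificate):
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # apiserver process present?
	curl -k https://192.168.50.228:8444/healthz     # the log saw HTTP 200 with body "ok"
	sudo systemctl status kubelet                   # kubelet service healthy?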
	I0805 13:04:02.115811  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:02.116057  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:12.115990  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:12.116208  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:32.116734  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:32.117001  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119196  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:05:12.119475  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119502  451238 kubeadm.go:310] 
	I0805 13:05:12.119564  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:05:12.119622  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:05:12.119634  451238 kubeadm.go:310] 
	I0805 13:05:12.119680  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:05:12.119724  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:05:12.119880  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:05:12.119898  451238 kubeadm.go:310] 
	I0805 13:05:12.120029  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:05:12.120114  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:05:12.120169  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:05:12.120179  451238 kubeadm.go:310] 
	I0805 13:05:12.120321  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:05:12.120445  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:05:12.120455  451238 kubeadm.go:310] 
	I0805 13:05:12.120612  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:05:12.120751  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:05:12.120888  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:05:12.121010  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:05:12.121023  451238 kubeadm.go:310] 
	I0805 13:05:12.121325  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:05:12.121458  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:05:12.121545  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 13:05:12.121714  451238 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
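	The failure above is the standard kubeadm wait-control-plane timeout, and the output itself names the triage steps. A minimal sketch of running them on the node (CRI-O socket path taken from the message; CONTAINERID stands for whichever container the ps listing shows as failing):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz    # the probe kubeadm keeps retrying above
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # logs of the failing container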
	
	I0805 13:05:12.121782  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:05:12.587687  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:05:12.603422  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:05:12.614302  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:05:12.614330  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:05:12.614391  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:05:12.625131  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:05:12.625199  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:05:12.635606  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:05:12.644896  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:05:12.644953  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:05:12.655178  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.664668  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:05:12.664753  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.675174  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:05:12.684765  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:05:12.684834  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:05:12.694762  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:05:12.930906  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:07:09.256859  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:07:09.257016  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 13:07:09.258511  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:07:09.258579  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:07:09.258710  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:07:09.258881  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:07:09.259022  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:07:09.259125  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:07:09.260912  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:07:09.261023  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:07:09.261123  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:07:09.261232  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:07:09.261319  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:07:09.261411  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:07:09.261507  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:07:09.261601  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:07:09.261690  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:07:09.261801  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:07:09.261946  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:07:09.262015  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:07:09.262119  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:07:09.262198  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:07:09.262273  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:07:09.262369  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:07:09.262464  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:07:09.262615  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:07:09.262731  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:07:09.262770  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:07:09.262831  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:07:09.264428  451238 out.go:204]   - Booting up control plane ...
	I0805 13:07:09.264537  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:07:09.264663  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:07:09.264774  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:07:09.264896  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:07:09.265144  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:07:09.265224  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:07:09.265318  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265554  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265630  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265783  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265886  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266143  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266221  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266387  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266472  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266656  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266673  451238 kubeadm.go:310] 
	I0805 13:07:09.266707  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:07:09.266738  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:07:09.266743  451238 kubeadm.go:310] 
	I0805 13:07:09.266788  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:07:09.266819  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:07:09.266924  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:07:09.266932  451238 kubeadm.go:310] 
	I0805 13:07:09.267050  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:07:09.267137  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:07:09.267192  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:07:09.267201  451238 kubeadm.go:310] 
	I0805 13:07:09.267316  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:07:09.267435  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:07:09.267445  451238 kubeadm.go:310] 
	I0805 13:07:09.267570  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:07:09.267683  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:07:09.267802  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:07:09.267898  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:07:09.267986  451238 kubeadm.go:310] 
	I0805 13:07:09.268003  451238 kubeadm.go:394] duration metric: took 7m57.870990174s to StartCluster
	I0805 13:07:09.268066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:07:09.268158  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:07:09.311436  451238 cri.go:89] found id: ""
	I0805 13:07:09.311471  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.311497  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:07:09.311509  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:07:09.311573  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:07:09.347748  451238 cri.go:89] found id: ""
	I0805 13:07:09.347776  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.347784  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:07:09.347797  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:07:09.347860  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:07:09.385418  451238 cri.go:89] found id: ""
	I0805 13:07:09.385445  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.385453  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:07:09.385460  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:07:09.385517  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:07:09.427209  451238 cri.go:89] found id: ""
	I0805 13:07:09.427255  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.427268  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:07:09.427276  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:07:09.427360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:07:09.461763  451238 cri.go:89] found id: ""
	I0805 13:07:09.461787  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.461795  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:07:09.461801  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:07:09.461854  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:07:09.498655  451238 cri.go:89] found id: ""
	I0805 13:07:09.498692  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.498705  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:07:09.498713  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:07:09.498782  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:07:09.534100  451238 cri.go:89] found id: ""
	I0805 13:07:09.534134  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.534143  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:07:09.534149  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:07:09.534207  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:07:09.570089  451238 cri.go:89] found id: ""
	I0805 13:07:09.570125  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.570137  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:07:09.570153  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:07:09.570176  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:07:09.625158  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:07:09.625199  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:07:09.640087  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:07:09.640119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:07:09.719851  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:07:09.719879  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:07:09.719895  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:07:09.832717  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:07:09.832758  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0805 13:07:09.878585  451238 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 13:07:09.878653  451238 out.go:239] * 
	W0805 13:07:09.878739  451238 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.878767  451238 out.go:239] * 
	W0805 13:07:09.879755  451238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 13:07:09.883027  451238 out.go:177] 
	W0805 13:07:09.884197  451238 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.884243  451238 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 13:07:09.884265  451238 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 13:07:09.885783  451238 out.go:177] 
	
	
	==> CRI-O <==
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.366034703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863775366008437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b410eea6-efe0-4f10-b4e2-198cac44e82d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.366660591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b591568-2821-4ac3-9947-437258d10108 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.366741469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b591568-2821-4ac3-9947-437258d10108 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.366778060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6b591568-2821-4ac3-9947-437258d10108 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.403887882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0aa31f70-b2ec-4002-8009-ad7be64621e2 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.404005146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0aa31f70-b2ec-4002-8009-ad7be64621e2 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.405296474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdad022e-08b2-40cd-a2da-2a3d16d579f7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.405704207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863775405683701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdad022e-08b2-40cd-a2da-2a3d16d579f7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.406377988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=779257de-f6af-4301-9cad-a8658c56f53b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.406490692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=779257de-f6af-4301-9cad-a8658c56f53b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.406549016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=779257de-f6af-4301-9cad-a8658c56f53b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.441131091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e5225ed-7230-4fd5-9671-919ce9be86a9 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.441283687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e5225ed-7230-4fd5-9671-919ce9be86a9 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.443089786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39e8a1fc-5dd8-4ebe-bb44-414f27a64a0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.443520103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863775443497957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39e8a1fc-5dd8-4ebe-bb44-414f27a64a0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.444058611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a3348d8-2296-4b4c-9e7c-fe28b2ac03de name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.444125540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a3348d8-2296-4b4c-9e7c-fe28b2ac03de name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.444167630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8a3348d8-2296-4b4c-9e7c-fe28b2ac03de name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.477247984Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18c10f09-3ae7-4a64-977c-5394dce1fd70 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.477354001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18c10f09-3ae7-4a64-977c-5394dce1fd70 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.478789183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=785051e1-18ff-4ffa-8ba0-e2508d1295d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.479319120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863775479297778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=785051e1-18ff-4ffa-8ba0-e2508d1295d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.479839954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe31b852-59ca-4514-9d49-9c553301a8ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.479889893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe31b852-59ca-4514-9d49-9c553301a8ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:16:15 old-k8s-version-635707 crio[653]: time="2024-08-05 13:16:15.479951007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fe31b852-59ca-4514-9d49-9c553301a8ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 5 12:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051038] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.092710] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.744514] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605530] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 5 12:59] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.063666] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056001] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.204547] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.129155] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.264906] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +6.500378] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.060609] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.866070] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +12.192283] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 5 13:03] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Aug 5 13:05] systemd-fstab-generator[5302]: Ignoring "noauto" option for root device
	[  +0.067316] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:16:15 up 17 min,  0 users,  load average: 0.15, 0.10, 0.08
	Linux old-k8s-version-635707 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc00092cea0)
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: goroutine 160 [select]:
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c85ef0, 0x4f0ac20, 0xc0001f9a40, 0x1, 0xc0001000c0)
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024f0a0, 0xc0001000c0)
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000960160, 0xc00092ed00)
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 05 13:16:10 old-k8s-version-635707 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 05 13:16:10 old-k8s-version-635707 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 05 13:16:10 old-k8s-version-635707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 05 13:16:10 old-k8s-version-635707 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 05 13:16:10 old-k8s-version-635707 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6488]: I0805 13:16:10.730724    6488 server.go:416] Version: v1.20.0
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6488]: I0805 13:16:10.731011    6488 server.go:837] Client rotation is on, will bootstrap in background
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6488]: I0805 13:16:10.733249    6488 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6488]: W0805 13:16:10.734444    6488 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 05 13:16:10 old-k8s-version-635707 kubelet[6488]: I0805 13:16:10.734565    6488 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (222.081546ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-635707" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)
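Editor's note: the failure above ends with minikube exiting on K8S_KUBELET_NOT_RUNNING because 'kubeadm init' for v1.20.0 timed out waiting for the kubelet, which never answered its health check on 127.0.0.1:10248. The commands below are only a troubleshooting sketch assembled from suggestions already printed in the log; the profile name and start flags are copied from the Audit table (other original flags omitted for brevity), and the cgroup-driver override is minikube's own suggestion, not a confirmed fix.

    # inspect the kubelet inside the node, as kubeadm suggests above
    minikube -p old-k8s-version-635707 ssh -- sudo systemctl status kubelet
    minikube -p old-k8s-version-635707 ssh -- sudo journalctl -xeu kubelet

    # list any control-plane containers CRI-O managed to start
    minikube -p old-k8s-version-635707 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

    # retry the start with the kubelet cgroup-driver override minikube suggests
    minikube start -p old-k8s-version-635707 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

One common cause of this symptom is a cgroup-driver mismatch between the kubelet and CRI-O; the related issue linked in the log (kubernetes/minikube#4172) covers that case.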

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (376.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-669469 -n no-preload-669469
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-05 13:19:04.213174514 +0000 UTC m=+6731.170498219
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-669469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-669469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (7.273µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-669469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-669469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-669469 logs -n 25: (1.399459674s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo find                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo crio                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-119870                                       | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 13:18 UTC | 05 Aug 24 13:18 UTC |
	| start   | -p newest-cni-202226 --memory=2200 --alsologtostderr   | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:18 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 13:18:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 13:18:26.368362  457875 out.go:291] Setting OutFile to fd 1 ...
	I0805 13:18:26.368492  457875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 13:18:26.368505  457875 out.go:304] Setting ErrFile to fd 2...
	I0805 13:18:26.368510  457875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 13:18:26.368767  457875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 13:18:26.369349  457875 out.go:298] Setting JSON to false
	I0805 13:18:26.370490  457875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10853,"bootTime":1722853053,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 13:18:26.370547  457875 start.go:139] virtualization: kvm guest
	I0805 13:18:26.373014  457875 out.go:177] * [newest-cni-202226] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 13:18:26.374386  457875 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 13:18:26.374420  457875 notify.go:220] Checking for updates...
	I0805 13:18:26.376868  457875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 13:18:26.378124  457875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:18:26.379360  457875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 13:18:26.380591  457875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 13:18:26.381914  457875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 13:18:26.383621  457875 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:18:26.383731  457875 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:18:26.383872  457875 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:18:26.384026  457875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 13:18:26.421218  457875 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 13:18:26.422582  457875 start.go:297] selected driver: kvm2
	I0805 13:18:26.422597  457875 start.go:901] validating driver "kvm2" against <nil>
	I0805 13:18:26.422608  457875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 13:18:26.423296  457875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 13:18:26.423376  457875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 13:18:26.438324  457875 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 13:18:26.438387  457875 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0805 13:18:26.438413  457875 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0805 13:18:26.438689  457875 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 13:18:26.438731  457875 cni.go:84] Creating CNI manager for ""
	I0805 13:18:26.438741  457875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:18:26.438749  457875 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 13:18:26.438818  457875 start.go:340] cluster config:
	{Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 13:18:26.438970  457875 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 13:18:26.440871  457875 out.go:177] * Starting "newest-cni-202226" primary control-plane node in "newest-cni-202226" cluster
	I0805 13:18:26.441898  457875 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 13:18:26.441933  457875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0805 13:18:26.441943  457875 cache.go:56] Caching tarball of preloaded images
	I0805 13:18:26.442039  457875 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 13:18:26.442053  457875 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0805 13:18:26.442180  457875 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/config.json ...
	I0805 13:18:26.442208  457875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/config.json: {Name:mk1a5254133dcb701bf7be7d071737595fc9038f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:26.442422  457875 start.go:360] acquireMachinesLock for newest-cni-202226: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 13:18:26.442461  457875 start.go:364] duration metric: took 20.565µs to acquireMachinesLock for "newest-cni-202226"
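The two acquiring-lock lines above show the profile config write and the machines directory each being guarded by a named lock with a 500ms retry delay and a timeout. A minimal sketch of that lock-file pattern, assuming nothing about minikube's actual lock.go beyond the Delay/Timeout values visible in the log:

// Assumed sketch, not minikube's lock.go: serialise writes to a shared
// profile file by creating a sidecar ".lock" file with O_EXCL, retrying
// every 500ms (the Delay seen above) until a timeout.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func withFileLock(path string, timeout time.Duration, fn func() error) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			defer os.Remove(lock) // release the lock when done
			f.Close()
			return fn()
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + lock)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := withFileLock("/tmp/config.json", time.Minute, func() error {
		return os.WriteFile("/tmp/config.json", []byte("{}"), 0o644)
	})
	fmt.Println(err)
}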
	I0805 13:18:26.442483  457875 start.go:93] Provisioning new machine with config: &{Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:18:26.442574  457875 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 13:18:26.443995  457875 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 13:18:26.444112  457875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:18:26.444161  457875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:18:26.458223  457875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
	I0805 13:18:26.458637  457875 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:18:26.459253  457875 main.go:141] libmachine: Using API Version  1
	I0805 13:18:26.459272  457875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:18:26.459615  457875 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:18:26.459946  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetMachineName
	I0805 13:18:26.460117  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:26.460285  457875 start.go:159] libmachine.API.Create for "newest-cni-202226" (driver="kvm2")
	I0805 13:18:26.460325  457875 client.go:168] LocalClient.Create starting
	I0805 13:18:26.460361  457875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem
	I0805 13:18:26.460408  457875 main.go:141] libmachine: Decoding PEM data...
	I0805 13:18:26.460432  457875 main.go:141] libmachine: Parsing certificate...
	I0805 13:18:26.460516  457875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem
	I0805 13:18:26.460545  457875 main.go:141] libmachine: Decoding PEM data...
	I0805 13:18:26.460567  457875 main.go:141] libmachine: Parsing certificate...
	I0805 13:18:26.460592  457875 main.go:141] libmachine: Running pre-create checks...
	I0805 13:18:26.460606  457875 main.go:141] libmachine: (newest-cni-202226) Calling .PreCreateCheck
	I0805 13:18:26.460999  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetConfigRaw
	I0805 13:18:26.461369  457875 main.go:141] libmachine: Creating machine...
	I0805 13:18:26.461402  457875 main.go:141] libmachine: (newest-cni-202226) Calling .Create
	I0805 13:18:26.461534  457875 main.go:141] libmachine: (newest-cni-202226) Creating KVM machine...
	I0805 13:18:26.462932  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found existing default KVM network
	I0805 13:18:26.464149  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:26.464008  457897 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a4:80:eb} reservation:<nil>}
	I0805 13:18:26.465077  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:26.464966  457897 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:90:96:e1} reservation:<nil>}
	I0805 13:18:26.466181  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:26.466129  457897 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5010}
	I0805 13:18:26.466260  457875 main.go:141] libmachine: (newest-cni-202226) DBG | created network xml: 
	I0805 13:18:26.466276  457875 main.go:141] libmachine: (newest-cni-202226) DBG | <network>
	I0805 13:18:26.466285  457875 main.go:141] libmachine: (newest-cni-202226) DBG |   <name>mk-newest-cni-202226</name>
	I0805 13:18:26.466293  457875 main.go:141] libmachine: (newest-cni-202226) DBG |   <dns enable='no'/>
	I0805 13:18:26.466302  457875 main.go:141] libmachine: (newest-cni-202226) DBG |   
	I0805 13:18:26.466314  457875 main.go:141] libmachine: (newest-cni-202226) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0805 13:18:26.466327  457875 main.go:141] libmachine: (newest-cni-202226) DBG |     <dhcp>
	I0805 13:18:26.466350  457875 main.go:141] libmachine: (newest-cni-202226) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0805 13:18:26.466359  457875 main.go:141] libmachine: (newest-cni-202226) DBG |     </dhcp>
	I0805 13:18:26.466363  457875 main.go:141] libmachine: (newest-cni-202226) DBG |   </ip>
	I0805 13:18:26.466374  457875 main.go:141] libmachine: (newest-cni-202226) DBG |   
	I0805 13:18:26.466381  457875 main.go:141] libmachine: (newest-cni-202226) DBG | </network>
	I0805 13:18:26.466395  457875 main.go:141] libmachine: (newest-cni-202226) DBG | 
	I0805 13:18:26.471531  457875 main.go:141] libmachine: (newest-cni-202226) DBG | trying to create private KVM network mk-newest-cni-202226 192.168.61.0/24...
	I0805 13:18:26.542163  457875 main.go:141] libmachine: (newest-cni-202226) DBG | private KVM network mk-newest-cni-202226 192.168.61.0/24 created
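The driver has just skipped the two subnets already used by other profiles, picked the free 192.168.61.0/24 range, and defined the private libvirt network from the small XML document printed above. As a minimal sketch of how such a definition can be rendered in Go (illustrative only; the struct, template, and values below simply mirror what the log shows, and a real driver would hand the XML to libvirt rather than print it):

// Illustrative sketch: render a libvirt private-network definition like
// the one in the log. The struct fields and template are assumptions.
package main

import (
	"os"
	"text/template"
)

type netParams struct {
	Name    string
	Gateway string
	Netmask string
	DHCPLow string
	DHCPHi  string
}

const netXML = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPLow}}' end='{{.DHCPHi}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	t := template.Must(template.New("net").Parse(netXML))
	// A real driver would pass this XML to libvirt's network-define call;
	// here it is only written to stdout.
	if err := t.Execute(os.Stdout, netParams{
		Name:    "mk-newest-cni-202226",
		Gateway: "192.168.61.1",
		Netmask: "255.255.255.0",
		DHCPLow: "192.168.61.2",
		DHCPHi:  "192.168.61.253",
	}); err != nil {
		panic(err)
	}
}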
	I0805 13:18:26.542204  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:26.542133  457897 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 13:18:26.542218  457875 main.go:141] libmachine: (newest-cni-202226) Setting up store path in /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226 ...
	I0805 13:18:26.542235  457875 main.go:141] libmachine: (newest-cni-202226) Building disk image from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 13:18:26.542335  457875 main.go:141] libmachine: (newest-cni-202226) Downloading /home/jenkins/minikube-integration/19377-383955/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 13:18:26.834608  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:26.834489  457897 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa...
	I0805 13:18:26.993727  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:26.993585  457897 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/newest-cni-202226.rawdisk...
	I0805 13:18:26.993754  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Writing magic tar header
	I0805 13:18:26.993767  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Writing SSH key tar header
	I0805 13:18:26.993775  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:26.993733  457897 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226 ...
	I0805 13:18:26.993882  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226
	I0805 13:18:26.993897  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube/machines
	I0805 13:18:26.993908  457875 main.go:141] libmachine: (newest-cni-202226) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226 (perms=drwx------)
	I0805 13:18:26.993915  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 13:18:26.993925  457875 main.go:141] libmachine: (newest-cni-202226) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube/machines (perms=drwxr-xr-x)
	I0805 13:18:26.993937  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19377-383955
	I0805 13:18:26.993949  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 13:18:26.993961  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Checking permissions on dir: /home/jenkins
	I0805 13:18:26.993971  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Checking permissions on dir: /home
	I0805 13:18:26.993976  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Skipping /home - not owner
	I0805 13:18:26.993986  457875 main.go:141] libmachine: (newest-cni-202226) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955/.minikube (perms=drwxr-xr-x)
	I0805 13:18:26.993992  457875 main.go:141] libmachine: (newest-cni-202226) Setting executable bit set on /home/jenkins/minikube-integration/19377-383955 (perms=drwxrwxr-x)
	I0805 13:18:26.994002  457875 main.go:141] libmachine: (newest-cni-202226) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 13:18:26.994008  457875 main.go:141] libmachine: (newest-cni-202226) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 13:18:26.994017  457875 main.go:141] libmachine: (newest-cni-202226) Creating domain...
	I0805 13:18:26.995264  457875 main.go:141] libmachine: (newest-cni-202226) define libvirt domain using xml: 
	I0805 13:18:26.995287  457875 main.go:141] libmachine: (newest-cni-202226) <domain type='kvm'>
	I0805 13:18:26.995297  457875 main.go:141] libmachine: (newest-cni-202226)   <name>newest-cni-202226</name>
	I0805 13:18:26.995305  457875 main.go:141] libmachine: (newest-cni-202226)   <memory unit='MiB'>2200</memory>
	I0805 13:18:26.995313  457875 main.go:141] libmachine: (newest-cni-202226)   <vcpu>2</vcpu>
	I0805 13:18:26.995319  457875 main.go:141] libmachine: (newest-cni-202226)   <features>
	I0805 13:18:26.995327  457875 main.go:141] libmachine: (newest-cni-202226)     <acpi/>
	I0805 13:18:26.995356  457875 main.go:141] libmachine: (newest-cni-202226)     <apic/>
	I0805 13:18:26.995367  457875 main.go:141] libmachine: (newest-cni-202226)     <pae/>
	I0805 13:18:26.995371  457875 main.go:141] libmachine: (newest-cni-202226)     
	I0805 13:18:26.995376  457875 main.go:141] libmachine: (newest-cni-202226)   </features>
	I0805 13:18:26.995391  457875 main.go:141] libmachine: (newest-cni-202226)   <cpu mode='host-passthrough'>
	I0805 13:18:26.995399  457875 main.go:141] libmachine: (newest-cni-202226)   
	I0805 13:18:26.995404  457875 main.go:141] libmachine: (newest-cni-202226)   </cpu>
	I0805 13:18:26.995409  457875 main.go:141] libmachine: (newest-cni-202226)   <os>
	I0805 13:18:26.995416  457875 main.go:141] libmachine: (newest-cni-202226)     <type>hvm</type>
	I0805 13:18:26.995421  457875 main.go:141] libmachine: (newest-cni-202226)     <boot dev='cdrom'/>
	I0805 13:18:26.995425  457875 main.go:141] libmachine: (newest-cni-202226)     <boot dev='hd'/>
	I0805 13:18:26.995432  457875 main.go:141] libmachine: (newest-cni-202226)     <bootmenu enable='no'/>
	I0805 13:18:26.995437  457875 main.go:141] libmachine: (newest-cni-202226)   </os>
	I0805 13:18:26.995444  457875 main.go:141] libmachine: (newest-cni-202226)   <devices>
	I0805 13:18:26.995449  457875 main.go:141] libmachine: (newest-cni-202226)     <disk type='file' device='cdrom'>
	I0805 13:18:26.995459  457875 main.go:141] libmachine: (newest-cni-202226)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/boot2docker.iso'/>
	I0805 13:18:26.995468  457875 main.go:141] libmachine: (newest-cni-202226)       <target dev='hdc' bus='scsi'/>
	I0805 13:18:26.995503  457875 main.go:141] libmachine: (newest-cni-202226)       <readonly/>
	I0805 13:18:26.995530  457875 main.go:141] libmachine: (newest-cni-202226)     </disk>
	I0805 13:18:26.995542  457875 main.go:141] libmachine: (newest-cni-202226)     <disk type='file' device='disk'>
	I0805 13:18:26.995561  457875 main.go:141] libmachine: (newest-cni-202226)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 13:18:26.995575  457875 main.go:141] libmachine: (newest-cni-202226)       <source file='/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/newest-cni-202226.rawdisk'/>
	I0805 13:18:26.995586  457875 main.go:141] libmachine: (newest-cni-202226)       <target dev='hda' bus='virtio'/>
	I0805 13:18:26.995598  457875 main.go:141] libmachine: (newest-cni-202226)     </disk>
	I0805 13:18:26.995611  457875 main.go:141] libmachine: (newest-cni-202226)     <interface type='network'>
	I0805 13:18:26.995625  457875 main.go:141] libmachine: (newest-cni-202226)       <source network='mk-newest-cni-202226'/>
	I0805 13:18:26.995635  457875 main.go:141] libmachine: (newest-cni-202226)       <model type='virtio'/>
	I0805 13:18:26.995643  457875 main.go:141] libmachine: (newest-cni-202226)     </interface>
	I0805 13:18:26.995653  457875 main.go:141] libmachine: (newest-cni-202226)     <interface type='network'>
	I0805 13:18:26.995662  457875 main.go:141] libmachine: (newest-cni-202226)       <source network='default'/>
	I0805 13:18:26.995669  457875 main.go:141] libmachine: (newest-cni-202226)       <model type='virtio'/>
	I0805 13:18:26.995711  457875 main.go:141] libmachine: (newest-cni-202226)     </interface>
	I0805 13:18:26.995734  457875 main.go:141] libmachine: (newest-cni-202226)     <serial type='pty'>
	I0805 13:18:26.995777  457875 main.go:141] libmachine: (newest-cni-202226)       <target port='0'/>
	I0805 13:18:26.995790  457875 main.go:141] libmachine: (newest-cni-202226)     </serial>
	I0805 13:18:26.995797  457875 main.go:141] libmachine: (newest-cni-202226)     <console type='pty'>
	I0805 13:18:26.995805  457875 main.go:141] libmachine: (newest-cni-202226)       <target type='serial' port='0'/>
	I0805 13:18:26.995809  457875 main.go:141] libmachine: (newest-cni-202226)     </console>
	I0805 13:18:26.995816  457875 main.go:141] libmachine: (newest-cni-202226)     <rng model='virtio'>
	I0805 13:18:26.995822  457875 main.go:141] libmachine: (newest-cni-202226)       <backend model='random'>/dev/random</backend>
	I0805 13:18:26.995829  457875 main.go:141] libmachine: (newest-cni-202226)     </rng>
	I0805 13:18:26.995834  457875 main.go:141] libmachine: (newest-cni-202226)     
	I0805 13:18:26.995840  457875 main.go:141] libmachine: (newest-cni-202226)     
	I0805 13:18:26.995845  457875 main.go:141] libmachine: (newest-cni-202226)   </devices>
	I0805 13:18:26.995852  457875 main.go:141] libmachine: (newest-cni-202226) </domain>
	I0805 13:18:26.995859  457875 main.go:141] libmachine: (newest-cni-202226) 
	I0805 13:18:27.000005  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:ba:a2:69 in network default
	I0805 13:18:27.000743  457875 main.go:141] libmachine: (newest-cni-202226) Ensuring networks are active...
	I0805 13:18:27.000774  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:27.001506  457875 main.go:141] libmachine: (newest-cni-202226) Ensuring network default is active
	I0805 13:18:27.001769  457875 main.go:141] libmachine: (newest-cni-202226) Ensuring network mk-newest-cni-202226 is active
	I0805 13:18:27.002177  457875 main.go:141] libmachine: (newest-cni-202226) Getting domain xml...
	I0805 13:18:27.003431  457875 main.go:141] libmachine: (newest-cni-202226) Creating domain...
	I0805 13:18:28.270704  457875 main.go:141] libmachine: (newest-cni-202226) Waiting to get IP...
	I0805 13:18:28.271679  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:28.272177  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:28.272200  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:28.272123  457897 retry.go:31] will retry after 265.021167ms: waiting for machine to come up
	I0805 13:18:28.538724  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:28.539225  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:28.539255  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:28.539169  457897 retry.go:31] will retry after 242.749139ms: waiting for machine to come up
	I0805 13:18:28.783761  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:28.784372  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:28.784397  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:28.784323  457897 retry.go:31] will retry after 386.37514ms: waiting for machine to come up
	I0805 13:18:29.171867  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:29.172344  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:29.172374  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:29.172293  457897 retry.go:31] will retry after 512.222291ms: waiting for machine to come up
	I0805 13:18:29.685925  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:29.686406  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:29.686430  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:29.686350  457897 retry.go:31] will retry after 528.323657ms: waiting for machine to come up
	I0805 13:18:30.216177  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:30.216638  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:30.216667  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:30.216583  457897 retry.go:31] will retry after 853.235297ms: waiting for machine to come up
	I0805 13:18:31.071071  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:31.071484  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:31.071549  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:31.071454  457897 retry.go:31] will retry after 830.888939ms: waiting for machine to come up
	I0805 13:18:31.904113  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:31.904691  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:31.904724  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:31.904636  457897 retry.go:31] will retry after 1.099819521s: waiting for machine to come up
	I0805 13:18:33.005947  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:33.006438  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:33.006469  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:33.006400  457897 retry.go:31] will retry after 1.439900304s: waiting for machine to come up
	I0805 13:18:34.448436  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:34.448836  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:34.448862  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:34.448795  457897 retry.go:31] will retry after 1.864467776s: waiting for machine to come up
	I0805 13:18:36.315005  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:36.315499  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:36.315529  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:36.315451  457897 retry.go:31] will retry after 2.566511981s: waiting for machine to come up
	I0805 13:18:38.884717  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:38.885152  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:38.885175  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:38.885072  457897 retry.go:31] will retry after 2.661819272s: waiting for machine to come up
	I0805 13:18:41.548306  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:41.548697  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:41.548723  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:41.548658  457897 retry.go:31] will retry after 3.151248526s: waiting for machine to come up
	I0805 13:18:44.702019  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:44.702461  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:18:44.702493  457875 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:18:44.702409  457897 retry.go:31] will retry after 4.595987858s: waiting for machine to come up
	I0805 13:18:49.300940  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.301450  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has current primary IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.301472  457875 main.go:141] libmachine: (newest-cni-202226) Found IP for machine: 192.168.61.136
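Creating the domain does not return an address, so the driver polls for a DHCP lease with progressively longer, jittered sleeps (265ms up to about 4.6s above) until 192.168.61.136 shows up. A minimal sketch of that wait-and-retry loop, an assumption rather than minikube's actual retry.go:

// Assumed sketch of the wait-for-IP loop: poll a check function with
// exponentially growing, jittered delays until it succeeds or times out.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(check func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := check(); err == nil {
			return ip, nil
		} else {
			// Jitter keeps parallel machine creations from polling in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, sleep)
			time.Sleep(sleep)
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Placeholder check; a real driver would look up the domain's MAC
	// address in the libvirt DHCP leases here.
	ip, err := waitFor(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}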
	I0805 13:18:49.301487  457875 main.go:141] libmachine: (newest-cni-202226) Reserving static IP address...
	I0805 13:18:49.301809  457875 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find host DHCP lease matching {name: "newest-cni-202226", mac: "52:54:00:13:72:ff", ip: "192.168.61.136"} in network mk-newest-cni-202226
	I0805 13:18:49.378755  457875 main.go:141] libmachine: (newest-cni-202226) Reserved static IP address: 192.168.61.136
	I0805 13:18:49.378779  457875 main.go:141] libmachine: (newest-cni-202226) Waiting for SSH to be available...
	I0805 13:18:49.378800  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Getting to WaitForSSH function...
	I0805 13:18:49.381696  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.382211  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:49.382249  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.382364  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Using SSH client type: external
	I0805 13:18:49.382392  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa (-rw-------)
	I0805 13:18:49.382443  457875 main.go:141] libmachine: (newest-cni-202226) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 13:18:49.382466  457875 main.go:141] libmachine: (newest-cni-202226) DBG | About to run SSH command:
	I0805 13:18:49.382482  457875 main.go:141] libmachine: (newest-cni-202226) DBG | exit 0
	I0805 13:18:49.512280  457875 main.go:141] libmachine: (newest-cni-202226) DBG | SSH cmd err, output: <nil>: 
	I0805 13:18:49.512610  457875 main.go:141] libmachine: (newest-cni-202226) KVM machine creation complete!
	I0805 13:18:49.512950  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetConfigRaw
	I0805 13:18:49.513523  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:49.513741  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:49.513980  457875 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 13:18:49.513996  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetState
	I0805 13:18:49.515457  457875 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 13:18:49.515470  457875 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 13:18:49.515476  457875 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 13:18:49.515481  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:49.518001  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.518361  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:49.518381  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.518558  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:49.518739  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.518895  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.519031  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:49.519227  457875 main.go:141] libmachine: Using SSH client type: native
	I0805 13:18:49.519437  457875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:18:49.519451  457875 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 13:18:49.631354  457875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 13:18:49.631379  457875 main.go:141] libmachine: Detecting the provisioner...
	I0805 13:18:49.631388  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:49.634255  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.634624  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:49.634670  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.634789  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:49.635057  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.635269  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.635452  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:49.635636  457875 main.go:141] libmachine: Using SSH client type: native
	I0805 13:18:49.635886  457875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:18:49.635902  457875 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 13:18:49.748769  457875 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 13:18:49.748849  457875 main.go:141] libmachine: found compatible host: buildroot
	I0805 13:18:49.748857  457875 main.go:141] libmachine: Provisioning with buildroot...
	I0805 13:18:49.748865  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetMachineName
	I0805 13:18:49.749155  457875 buildroot.go:166] provisioning hostname "newest-cni-202226"
	I0805 13:18:49.749202  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetMachineName
	I0805 13:18:49.749372  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:49.752058  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.752469  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:49.752499  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.752712  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:49.752918  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.753125  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.753284  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:49.753454  457875 main.go:141] libmachine: Using SSH client type: native
	I0805 13:18:49.753702  457875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:18:49.753721  457875 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-202226 && echo "newest-cni-202226" | sudo tee /etc/hostname
	I0805 13:18:49.881214  457875 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-202226
	
	I0805 13:18:49.881247  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:49.883882  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.884197  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:49.884241  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:49.884406  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:49.884634  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.884816  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:49.884990  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:49.885203  457875 main.go:141] libmachine: Using SSH client type: native
	I0805 13:18:49.885433  457875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:18:49.885458  457875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-202226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-202226/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-202226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 13:18:50.004907  457875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
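Provisioning is plain shell executed over SSH with the machine key generated earlier: set the hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it. A minimal sketch of running such a command from Go with golang.org/x/crypto/ssh; the key path is a placeholder, the address and user are the ones visible in this log, and host-key checking is skipped only because these are throwaway test VMs:

// Illustrative sketch of running a provisioning command over SSH.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/newest-cni-202226/id_rsa") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.61.136:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	var out bytes.Buffer
	sess.Stdout = &out
	cmd := `sudo hostname newest-cni-202226 && echo "newest-cni-202226" | sudo tee /etc/hostname`
	if err := sess.Run(cmd); err != nil {
		log.Fatal(err)
	}
	fmt.Print(out.String())
}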
	I0805 13:18:50.004966  457875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 13:18:50.005005  457875 buildroot.go:174] setting up certificates
	I0805 13:18:50.005015  457875 provision.go:84] configureAuth start
	I0805 13:18:50.005028  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetMachineName
	I0805 13:18:50.005337  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:18:50.007933  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.008295  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.008336  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.008499  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:50.010808  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.011133  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.011158  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.011307  457875 provision.go:143] copyHostCerts
	I0805 13:18:50.011383  457875 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 13:18:50.011405  457875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 13:18:50.011488  457875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 13:18:50.011603  457875 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 13:18:50.011616  457875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 13:18:50.011662  457875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 13:18:50.011733  457875 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 13:18:50.011752  457875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 13:18:50.011787  457875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 13:18:50.011902  457875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.newest-cni-202226 san=[127.0.0.1 192.168.61.136 localhost minikube newest-cni-202226]
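The server certificate is generated with the SAN list shown above (loopback, the VM's DHCP address, and the hostname aliases) so the endpoint validates both from the host and from inside the guest. A minimal, self-signed sketch with Go's crypto/x509 producing the same SAN set; minikube signs with the CA key material from its certs directory rather than self-signing, so treat the details as illustrative:

// Self-signed sketch of a server certificate carrying the SANs listed
// in the log; not the actual minikube signing path.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-202226"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-202226"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.136")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}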
	I0805 13:18:50.191356  457875 provision.go:177] copyRemoteCerts
	I0805 13:18:50.191436  457875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 13:18:50.191470  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:50.194369  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.194702  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.194733  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.194966  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:50.195159  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.195295  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:50.195404  457875 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:18:50.282373  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 13:18:50.308224  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 13:18:50.332180  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 13:18:50.356602  457875 provision.go:87] duration metric: took 351.568345ms to configureAuth
	I0805 13:18:50.356643  457875 buildroot.go:189] setting minikube options for container-runtime
	I0805 13:18:50.356849  457875 config.go:182] Loaded profile config "newest-cni-202226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:18:50.356941  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:50.359993  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.360368  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.360400  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.360590  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:50.360794  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.360972  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.361196  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:50.361365  457875 main.go:141] libmachine: Using SSH client type: native
	I0805 13:18:50.361539  457875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:18:50.361555  457875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 13:18:50.648986  457875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 13:18:50.649020  457875 main.go:141] libmachine: Checking connection to Docker...
	I0805 13:18:50.649029  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetURL
	I0805 13:18:50.650545  457875 main.go:141] libmachine: (newest-cni-202226) DBG | Using libvirt version 6000000
	I0805 13:18:50.652809  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.653189  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.653220  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.653471  457875 main.go:141] libmachine: Docker is up and running!
	I0805 13:18:50.653485  457875 main.go:141] libmachine: Reticulating splines...
	I0805 13:18:50.653492  457875 client.go:171] duration metric: took 24.193156045s to LocalClient.Create
	I0805 13:18:50.653514  457875 start.go:167] duration metric: took 24.193229507s to libmachine.API.Create "newest-cni-202226"
	I0805 13:18:50.653529  457875 start.go:293] postStartSetup for "newest-cni-202226" (driver="kvm2")
	I0805 13:18:50.653545  457875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 13:18:50.653564  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:50.653829  457875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 13:18:50.653869  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:50.656328  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.656680  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.656707  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.656865  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:50.657061  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.657262  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:50.657439  457875 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:18:50.742313  457875 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 13:18:50.746828  457875 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 13:18:50.746876  457875 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 13:18:50.746960  457875 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 13:18:50.747057  457875 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 13:18:50.747171  457875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 13:18:50.756547  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 13:18:50.780905  457875 start.go:296] duration metric: took 127.358825ms for postStartSetup
	I0805 13:18:50.780968  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetConfigRaw
	I0805 13:18:50.781526  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:18:50.785627  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.786105  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.786135  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.786434  457875 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/config.json ...
	I0805 13:18:50.786610  457875 start.go:128] duration metric: took 24.34402361s to createHost
	I0805 13:18:50.786634  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:50.788928  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.789232  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.789258  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.789407  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:50.789612  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.789830  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.790018  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:50.790250  457875 main.go:141] libmachine: Using SSH client type: native
	I0805 13:18:50.790457  457875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:18:50.790472  457875 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 13:18:50.900464  457875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722863930.874942893
	
	I0805 13:18:50.900487  457875 fix.go:216] guest clock: 1722863930.874942893
	I0805 13:18:50.900496  457875 fix.go:229] Guest: 2024-08-05 13:18:50.874942893 +0000 UTC Remote: 2024-08-05 13:18:50.786622424 +0000 UTC m=+24.453687715 (delta=88.320469ms)
	I0805 13:18:50.900523  457875 fix.go:200] guest clock delta is within tolerance: 88.320469ms
	I0805 13:18:50.900530  457875 start.go:83] releasing machines lock for "newest-cni-202226", held for 24.458061197s
	I0805 13:18:50.900550  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:50.900839  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:18:50.903451  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.903726  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.903773  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.903948  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:50.904491  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:50.904702  457875 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:18:50.904787  457875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 13:18:50.904823  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:50.904932  457875 ssh_runner.go:195] Run: cat /version.json
	I0805 13:18:50.904963  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:18:50.907662  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.907788  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.908050  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.908082  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.908108  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:50.908126  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:50.908201  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:50.908312  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:18:50.908395  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.908474  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:18:50.908532  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:50.908692  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:18:50.908679  457875 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:18:50.908822  457875 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:18:51.007527  457875 ssh_runner.go:195] Run: systemctl --version
	I0805 13:18:51.013398  457875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 13:18:51.167487  457875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 13:18:51.174618  457875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 13:18:51.174673  457875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 13:18:51.191634  457875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 13:18:51.191660  457875 start.go:495] detecting cgroup driver to use...
	I0805 13:18:51.191757  457875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 13:18:51.207018  457875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 13:18:51.220293  457875 docker.go:217] disabling cri-docker service (if available) ...
	I0805 13:18:51.220340  457875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 13:18:51.237353  457875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 13:18:51.251977  457875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 13:18:51.387504  457875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 13:18:51.529575  457875 docker.go:233] disabling docker service ...
	I0805 13:18:51.529658  457875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 13:18:51.544849  457875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 13:18:51.558513  457875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 13:18:51.708184  457875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 13:18:51.841685  457875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 13:18:51.856078  457875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 13:18:51.875088  457875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:18:52.165784  457875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 13:18:52.165858  457875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:18:52.177369  457875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 13:18:52.177433  457875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:18:52.187925  457875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:18:52.199071  457875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:18:52.209357  457875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 13:18:52.219422  457875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:18:52.229796  457875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:18:52.249911  457875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:18:52.260749  457875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 13:18:52.270749  457875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 13:18:52.270825  457875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 13:18:52.284913  457875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 13:18:52.295180  457875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:18:52.426553  457875 ssh_runner.go:195] Run: sudo systemctl restart crio
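The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is switched to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A rough, self-contained Go sketch of the same rewrite applied to an in-memory copy of the file (the sample input below is made up; minikube performs these edits with sed over SSH, not with this code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf condenses the sed edits from the log into one pass over
    // /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the
    // cgroup manager to cgroupfs, run conmon in the "pod" cgroup, and allow
    // pods to bind unprivileged ports.
    func rewriteCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	return conf
    }

    func main() {
    	// Hypothetical input fragment, for illustration only.
    	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(rewriteCrioConf(sample))
    }
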
	I0805 13:18:52.568711  457875 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 13:18:52.568796  457875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 13:18:52.574199  457875 start.go:563] Will wait 60s for crictl version
	I0805 13:18:52.574261  457875 ssh_runner.go:195] Run: which crictl
	I0805 13:18:52.577997  457875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 13:18:52.616295  457875 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 13:18:52.616389  457875 ssh_runner.go:195] Run: crio --version
	I0805 13:18:52.649196  457875 ssh_runner.go:195] Run: crio --version
	I0805 13:18:52.685484  457875 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 13:18:52.686770  457875 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:18:52.689577  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:52.689940  457875 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:18:41 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:18:52.689969  457875 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:18:52.690215  457875 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 13:18:52.694559  457875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 13:18:52.709234  457875 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0805 13:18:52.710414  457875 kubeadm.go:883] updating cluster {Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 13:18:52.710617  457875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:18:53.013537  457875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:18:53.339660  457875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:18:53.644987  457875 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 13:18:53.645166  457875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:18:53.946017  457875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:18:54.250486  457875 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
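The repeated "Not caching binary" lines mean the v1.31.0-rc.0 kubeadm (and companion) binaries are fetched straight from dl.k8s.io and checked against the published .sha256 file named in the checksum= query, rather than served from the local cache. A self-contained sketch of that verification step (the URLs match the log; the helper functions are illustrative, not minikube's downloader):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"strings"
    )

    // fetchVerified downloads url and checks it against the hex digest
    // published at sumURL (a .sha256 file whose first token is the digest).
    func fetchVerified(url, sumURL string) ([]byte, error) {
    	body, err := get(url)
    	if err != nil {
    		return nil, err
    	}
    	sum, err := get(sumURL)
    	if err != nil {
    		return nil, err
    	}
    	want := strings.Fields(string(sum))[0]
    	got := sha256.Sum256(body)
    	if hex.EncodeToString(got[:]) != want {
    		return nil, fmt.Errorf("checksum mismatch: got %x, want %s", got, want)
    	}
    	return body, nil
    }

    func get(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	const base = "https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm"
    	bin, err := fetchVerified(base, base+".sha256")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("downloaded %d verified bytes\n", len(bin))
    }
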
	I0805 13:18:54.531489  457875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 13:18:54.568389  457875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 13:18:54.568458  457875 ssh_runner.go:195] Run: which lz4
	I0805 13:18:54.572603  457875 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 13:18:54.577055  457875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 13:18:54.577089  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389126804 bytes)
	I0805 13:18:55.948810  457875 crio.go:462] duration metric: took 1.37624802s to copy over tarball
	I0805 13:18:55.948903  457875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 13:18:58.102434  457875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153488816s)
	I0805 13:18:58.102469  457875 crio.go:469] duration metric: took 2.153627195s to extract the tarball
	I0805 13:18:58.102477  457875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 13:18:58.140233  457875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 13:18:58.189559  457875 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 13:18:58.189589  457875 cache_images.go:84] Images are preloaded, skipping loading
	I0805 13:18:58.189597  457875 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.0-rc.0 crio true true} ...
	I0805 13:18:58.189753  457875 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-202226 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
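The kubelet unit override printed here (Wants=crio.service plus the replacement ExecStart) is what later gets written as the 359-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and activated with daemon-reload and systemctl start kubelet. A small illustrative Go sketch that assembles the same drop-in from the node parameters shown in the log (the helper name and rendering code are not minikube's):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeletDropIn renders a systemd drop-in like the 10-kubeadm.conf in the
    // log, overriding ExecStart with the per-node kubelet flags.
    func kubeletDropIn(binDir, nodeName, nodeIP, featureGates string) string {
    	flags := []string{
    		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
    		"--config=/var/lib/kubelet/config.yaml",
    		"--feature-gates=" + featureGates,
    		"--hostname-override=" + nodeName,
    		"--kubeconfig=/etc/kubernetes/kubelet.conf",
    		"--node-ip=" + nodeIP,
    	}
    	return fmt.Sprintf(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=%s/kubelet %s

    [Install]
    `, binDir, strings.Join(flags, " "))
    }

    func main() {
    	fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.31.0-rc.0",
    		"newest-cni-202226", "192.168.61.136", "ServerSideApply=true"))
    }
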
	I0805 13:18:58.189841  457875 ssh_runner.go:195] Run: crio config
	I0805 13:18:58.248784  457875 cni.go:84] Creating CNI manager for ""
	I0805 13:18:58.248806  457875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:18:58.248816  457875 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0805 13:18:58.248848  457875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-202226 NodeName:newest-cni-202226 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 13:18:58.249001  457875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-202226"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 13:18:58.249079  457875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 13:18:58.260728  457875 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 13:18:58.260812  457875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 13:18:58.271272  457875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0805 13:18:58.288674  457875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 13:18:58.305847  457875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
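The 2290-byte kubeadm.yaml.new copied here is the multi-document config printed above: an InitConfiguration and ClusterConfiguration for kubeadm plus a KubeletConfiguration and KubeProxyConfiguration, later copied to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init --config. One way to sanity-check such a file locally is to enumerate the documents it contains (a sketch using gopkg.in/yaml.v3; the local file path is hypothetical):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // Prints the apiVersion/kind of every document in a multi-document
    // kubeadm config such as the kubeadm.yaml generated above.
    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }
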
	I0805 13:18:58.324183  457875 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0805 13:18:58.328594  457875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 13:18:58.341828  457875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:18:58.471757  457875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:18:58.491657  457875 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226 for IP: 192.168.61.136
	I0805 13:18:58.491684  457875 certs.go:194] generating shared ca certs ...
	I0805 13:18:58.491706  457875 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:58.491950  457875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 13:18:58.492014  457875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 13:18:58.492041  457875 certs.go:256] generating profile certs ...
	I0805 13:18:58.492157  457875 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/client.key
	I0805 13:18:58.492175  457875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/client.crt with IP's: []
	I0805 13:18:58.777075  457875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/client.crt ...
	I0805 13:18:58.777107  457875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/client.crt: {Name:mkefd7593c7fcb735f7db9d6db6c01d3922ecaf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:58.777307  457875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/client.key ...
	I0805 13:18:58.777322  457875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/client.key: {Name:mkf4ddbc302575654b9530f414f07ec8d1033652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:58.777426  457875 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key.64b1698f
	I0805 13:18:58.777443  457875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.crt.64b1698f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.136]
	I0805 13:18:58.936420  457875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.crt.64b1698f ...
	I0805 13:18:58.936450  457875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.crt.64b1698f: {Name:mkd595909e5c578aa6eae1f30a11a786455f80db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:58.936648  457875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key.64b1698f ...
	I0805 13:18:58.936666  457875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key.64b1698f: {Name:mkf0ad9612e9c581688f6158ab815de9baa87044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:58.936778  457875 certs.go:381] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.crt.64b1698f -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.crt
	I0805 13:18:58.936889  457875 certs.go:385] copying /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key.64b1698f -> /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key
	I0805 13:18:58.936969  457875 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.key
	I0805 13:18:58.936989  457875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.crt with IP's: []
	I0805 13:18:59.129979  457875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.crt ...
	I0805 13:18:59.130013  457875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.crt: {Name:mkf28be3a53b3bd4291bcdd12bdaea80cd6c5f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:59.130219  457875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.key ...
	I0805 13:18:59.130237  457875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.key: {Name:mk8d4800ad3233f01cb9739133ac9c22b77b0b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:18:59.130475  457875 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 13:18:59.130527  457875 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 13:18:59.130542  457875 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 13:18:59.130571  457875 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 13:18:59.130601  457875 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 13:18:59.130634  457875 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 13:18:59.130697  457875 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 13:18:59.131376  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 13:18:59.158856  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 13:18:59.183943  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 13:18:59.209342  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 13:18:59.234218  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 13:18:59.265525  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 13:18:59.296662  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 13:18:59.327539  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 13:18:59.353937  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 13:18:59.378253  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 13:18:59.404608  457875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 13:18:59.428851  457875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 13:18:59.446696  457875 ssh_runner.go:195] Run: openssl version
	I0805 13:18:59.453150  457875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 13:18:59.465441  457875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 13:18:59.470363  457875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 13:18:59.470427  457875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 13:18:59.477349  457875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 13:18:59.490435  457875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 13:18:59.503179  457875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 13:18:59.508556  457875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 13:18:59.508641  457875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 13:18:59.514648  457875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 13:18:59.526542  457875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 13:18:59.538306  457875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 13:18:59.543833  457875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 13:18:59.543907  457875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 13:18:59.549762  457875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
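The ls/openssl/ln sequence above installs each uploaded PEM under /etc/ssl/certs by its OpenSSL subject hash, which is why minikubeCA.pem ends up as b5213941.0 and the test certs as 3ec20f2e.0 and 51391683.0: `openssl x509 -hash -noout -in <cert>` prints the hash, and a <hash>.0 symlink makes the certificate discoverable by TLS clients on the guest. A hedged Go sketch of the same steps run locally (paths as in the log; it shells out to openssl rather than re-implementing the hash):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors the log: compute the OpenSSL subject hash of a
    // CA certificate and expose it in certsDir as <hash>.0.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Equivalent to: test -L <link> || ln -fs <cert> <link>
    	if _, err := os.Lstat(link); err == nil {
    		return nil
    	}
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("symlink created")
    }
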
	I0805 13:18:59.562845  457875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 13:18:59.567296  457875 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 13:18:59.567373  457875 kubeadm.go:392] StartCluster: {Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 13:18:59.567487  457875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 13:18:59.567555  457875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 13:18:59.615500  457875 cri.go:89] found id: ""
	I0805 13:18:59.615589  457875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 13:18:59.627640  457875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:18:59.639020  457875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:18:59.649482  457875 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:18:59.649508  457875 kubeadm.go:157] found existing configuration files:
	
	I0805 13:18:59.649562  457875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:18:59.659405  457875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:18:59.659489  457875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:18:59.669800  457875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:18:59.679576  457875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:18:59.679642  457875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:18:59.690074  457875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:18:59.699754  457875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:18:59.699834  457875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:18:59.710280  457875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:18:59.719869  457875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:18:59.719943  457875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:18:59.730330  457875 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:18:59.843956  457875 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0805 13:18:59.844045  457875 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:18:59.954152  457875 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:18:59.954331  457875 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:18:59.954482  457875 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:18:59.965395  457875 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:19:00.012828  457875 out.go:204]   - Generating certificates and keys ...
	I0805 13:19:00.012989  457875 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:19:00.013082  457875 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:19:00.055232  457875 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 13:19:00.323476  457875 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 13:19:00.528946  457875 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 13:19:00.862403  457875 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 13:19:00.993631  457875 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 13:19:00.993923  457875 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-202226] and IPs [192.168.61.136 127.0.0.1 ::1]
	I0805 13:19:01.117932  457875 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 13:19:01.118158  457875 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-202226] and IPs [192.168.61.136 127.0.0.1 ::1]
	I0805 13:19:01.275767  457875 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 13:19:01.431898  457875 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 13:19:01.727458  457875 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 13:19:01.727858  457875 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:19:01.953607  457875 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:19:02.357524  457875 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:19:02.447987  457875 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:19:02.684708  457875 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:19:02.905275  457875 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:19:02.911408  457875 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:19:02.915149  457875 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.915286626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863944915259980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f6cb9ab-e48a-4167-8854-e8838020c4eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.915748938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=349e4892-0bba-4284-b65e-707b410ecf36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.915825476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=349e4892-0bba-4284-b65e-707b410ecf36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.916038542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=349e4892-0bba-4284-b65e-707b410ecf36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.972968248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99af381e-0d24-4856-9594-5c5bde607ca3 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.973073335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99af381e-0d24-4856-9594-5c5bde607ca3 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.974642713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e2043cb-474e-4206-a372-e6ee289b7804 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.975287998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863944975255306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e2043cb-474e-4206-a372-e6ee289b7804 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.976160420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec548980-ca1a-4822-b77b-eb7a2edb1051 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.976257209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec548980-ca1a-4822-b77b-eb7a2edb1051 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:04 no-preload-669469 crio[701]: time="2024-08-05 13:19:04.982298285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec548980-ca1a-4822-b77b-eb7a2edb1051 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.024965493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0aa015b2-d032-4ce7-b9c9-f3271a356001 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.025096237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0aa015b2-d032-4ce7-b9c9-f3271a356001 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.026648199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03c79ce8-e398-4ac3-84ba-f546e928994d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.027350136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863945027316244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03c79ce8-e398-4ac3-84ba-f546e928994d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.028094071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ce742cc-94f3-49d5-b349-b463c0c8fb4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.028203282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ce742cc-94f3-49d5-b349-b463c0c8fb4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.028561080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ce742cc-94f3-49d5-b349-b463c0c8fb4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.071140203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49fc75c6-0417-41b1-84a3-20fb6d753aa5 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.071261869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49fc75c6-0417-41b1-84a3-20fb6d753aa5 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.072820035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=065b26ea-cfe1-4a4f-93fd-47111a223df4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.073284387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863945073261332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=065b26ea-cfe1-4a4f-93fd-47111a223df4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.074301782Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fabbb66-aa7b-403b-a329-eabb40239178 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.074384724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fabbb66-aa7b-403b-a329-eabb40239178 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:05 no-preload-669469 crio[701]: time="2024-08-05 13:19:05.074754925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1,PodSandboxId:146e18ec96e30d222eeec255131747faf54b22756f186f1b863eed46c7b3f703,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863019230186971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb19adf6-e208-4709-b02f-ae32acc30478,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614,PodSandboxId:6dc0c99effd8af3d3e1c6b937ebf5c34e95a043e142d1ac70528cf75be4f4f01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019098109008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pqhwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7bb193-e93e-49b8-be4b-943f2d7fe59d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279,PodSandboxId:c967731df8d03ea3afe0cf2e7e561e4d5e786b8f4dca27e77ebd11c37dd8149a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863019097568700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-npbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
ea9e0a-697b-42c9-857c-a3556c658fde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0,PodSandboxId:f9ec1e715194fececc71cf1e147a83a51959ee540a7efa28629b0bc13b2e709a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722863018521148654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tpn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89e32f9-d750-41ac-891e-e3ca4a4fbbd2,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270,PodSandboxId:eafbded883a1b705b3a1450e46da11d61b3115332de9b047ea8f58f575a0d964,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722863007531246788,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107a9040e215dab2b8aab08673b4f751,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1,PodSandboxId:14e32cc8dc2a5a0dbbf579da212488b074aa56edad47e2bf531195d75854e49d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722863007506059824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b68c085e364fe312def9dbe225e5aa,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696,PodSandboxId:794289f6eaecd9a738b4f706dd2678a06270b91890e00ba4385ca63e7b4f6d8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722863007498373599,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8078ffb805fb9155d9fb81fa32307361,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022,PodSandboxId:6317297b1dcb515b7668c236dac256c8d620fb7f4b5448813cd1b8535b3a3992,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722863007416535495,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15,PodSandboxId:0b7be9f4229ba83122768ae8dc28b83e7d0f88b88ff58920dc2f33e630cafe0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722862722784968236,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-669469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aca0bf10be39af6c0200757bde06d77,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fabbb66-aa7b-403b-a329-eabb40239178 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	720f5cc7faa80       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   146e18ec96e30       storage-provisioner
	3ea0286156b03       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   6dc0c99effd8a       coredns-6f6b679f8f-pqhwx
	63d3e5aad7edc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   c967731df8d03       coredns-6f6b679f8f-npbmj
	a8745bae4cc7f       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   15 minutes ago      Running             kube-proxy                0                   f9ec1e715194f       kube-proxy-tpn5s
	4a71aa20c85d5       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   15 minutes ago      Running             kube-scheduler            2                   eafbded883a1b       kube-scheduler-no-preload-669469
	5b0d970998865       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   15 minutes ago      Running             kube-controller-manager   2                   14e32cc8dc2a5       kube-controller-manager-no-preload-669469
	6359cb0c85ad0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   794289f6eaecd       etcd-no-preload-669469
	6496630cffd11       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   15 minutes ago      Running             kube-apiserver            2                   6317297b1dcb5       kube-apiserver-no-preload-669469
	3f57b7378426c       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   20 minutes ago      Exited              kube-apiserver            1                   0b7be9f4229ba       kube-apiserver-no-preload-669469
	
	
	==> coredns [3ea0286156b0339e1479613c3a9526db65b88d0cc949618d5b9db1633024d614] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [63d3e5aad7edc6d373e79326f2cbe5725c39f8108e2c94a88d94054c1aaad279] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-669469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-669469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=no-preload-669469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 13:03:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-669469
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 13:19:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 13:19:00 +0000   Mon, 05 Aug 2024 13:03:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 13:19:00 +0000   Mon, 05 Aug 2024 13:03:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 13:19:00 +0000   Mon, 05 Aug 2024 13:03:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 13:19:00 +0000   Mon, 05 Aug 2024 13:03:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.223
	  Hostname:    no-preload-669469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6cf68b47cf0432ea69b9d25e8c7dfb7
	  System UUID:                e6cf68b4-7cf0-432e-a69b-9d25e8c7dfb7
	  Boot ID:                    c6760a17-44d7-4269-8a25-de73df8e3f0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-npbmj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-pqhwx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-669469                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-669469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-669469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tpn5s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-669469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-x4j7b              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-669469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-669469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-669469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-669469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-669469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-669469 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-669469 event: Registered Node no-preload-669469 in Controller
	
	
	==> dmesg <==
	[  +0.040658] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.767330] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.466493] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.441223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.665582] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.056619] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053528] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.214168] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.126035] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.591017] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[ +16.361707] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.059565] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.955777] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +5.717080] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.698288] kauditd_printk_skb: 52 callbacks suppressed
	[Aug 5 12:59] kauditd_printk_skb: 30 callbacks suppressed
	[Aug 5 13:03] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.399707] systemd-fstab-generator[3005]: Ignoring "noauto" option for root device
	[  +4.454018] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.610788] systemd-fstab-generator[3327]: Ignoring "noauto" option for root device
	[  +4.891320] systemd-fstab-generator[3440]: Ignoring "noauto" option for root device
	[  +0.108570] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.099091] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [6359cb0c85ad0b248f0ec187d3821cf1bbcec57798ce503047ed7bb6ca345696] <==
	{"level":"info","ts":"2024-08-05T13:03:28.661445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:28.661621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgPreVoteResp from 5072550c343bb357 at term 1"}
	{"level":"info","ts":"2024-08-05T13:03:28.661797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.661837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 received MsgVoteResp from 5072550c343bb357 at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.661946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5072550c343bb357 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.661982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5072550c343bb357 elected leader 5072550c343bb357 at term 2"}
	{"level":"info","ts":"2024-08-05T13:03:28.663612Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.664099Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5072550c343bb357","local-member-attributes":"{Name:no-preload-669469 ClientURLs:[https://192.168.72.223:2379]}","request-path":"/0/members/5072550c343bb357/attributes","cluster-id":"d0d4b5aa9c0518f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T13:03:28.664170Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T13:03:28.664948Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d0d4b5aa9c0518f1","local-member-id":"5072550c343bb357","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.665079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.665143Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:28.665183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T13:03:28.666463Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T13:03:28.667371Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:28.667430Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:28.669042Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T13:03:28.666483Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-05T13:03:28.676819Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.223:2379"}
	{"level":"info","ts":"2024-08-05T13:13:28.729042Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":719}
	{"level":"info","ts":"2024-08-05T13:13:28.738916Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":719,"took":"9.217062ms","hash":2137158558,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2215936,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-05T13:13:28.739003Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2137158558,"revision":719,"compact-revision":-1}
	{"level":"info","ts":"2024-08-05T13:18:28.737872Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-08-05T13:18:28.741912Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":963,"took":"3.636876ms","hash":1986417377,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-05T13:18:28.741979Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1986417377,"revision":963,"compact-revision":719}
	
	
	==> kernel <==
	 13:19:05 up 21 min,  0 users,  load average: 0.17, 0.13, 0.13
	Linux no-preload-669469 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f57b7378426c58059895e772facc804452834690b99650f40a477308fae1d15] <==
	W0805 13:03:22.681378       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.702206       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.753912       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.820521       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.820533       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.850968       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.884098       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.884610       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:22.984082       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.008146       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.014038       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.029605       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.043200       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.086274       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.094194       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.096897       1 logging.go:55] [core] [Channel #15 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.121631       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.243210       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.289026       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.385256       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.433107       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.496926       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.691076       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.921147       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 13:03:23.924885       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6496630cffd11b882d7d7bb3136ddd2b5aa3c243da638db1ed160978ea93c022] <==
	I0805 13:14:31.190214       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0805 13:14:31.190327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:16:31.190479       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:16:31.190952       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0805 13:16:31.190511       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:16:31.191167       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0805 13:16:31.192232       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0805 13:16:31.192279       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:18:30.192422       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:18:30.192741       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0805 13:18:31.195855       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:18:31.195975       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0805 13:18:31.196111       1 handler_proxy.go:99] no RequestInfo found in the context
	E0805 13:18:31.196143       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0805 13:18:31.197172       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0805 13:18:31.197251       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5b0d970998865240c4f69eb65c3a50b0071e25ec87618d9d74ccc2bb1cd8caa1] <==
	I0805 13:13:37.755016       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:13:54.565028       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-669469"
	E0805 13:14:07.308963       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:14:07.763342       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:14:37.316922       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:14:37.772375       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:14:42.787386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="107.004µs"
	I0805 13:14:57.783129       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.083µs"
	E0805 13:15:07.323142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:15:07.780489       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:15:37.331480       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:15:37.789892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:16:07.338869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:16:07.797557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:16:37.346029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:16:37.810095       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:17:07.352149       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:17:07.821040       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:17:37.362180       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:17:37.829964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:18:07.369905       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:18:07.838098       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:18:37.379213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0805 13:18:37.846327       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:19:00.566300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-669469"
	
	
	==> kube-proxy [a8745bae4cc7fc81a4dfa17d9f2a8b64ff736eda91fd2f05a7b189f3de1871d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0805 13:03:39.560782       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0805 13:03:39.582126       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.223"]
	E0805 13:03:39.582472       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0805 13:03:39.644601       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0805 13:03:39.644667       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 13:03:39.644781       1 server_linux.go:169] "Using iptables Proxier"
	I0805 13:03:39.650044       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0805 13:03:39.650429       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0805 13:03:39.650463       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 13:03:39.652241       1 config.go:197] "Starting service config controller"
	I0805 13:03:39.652309       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 13:03:39.652344       1 config.go:104] "Starting endpoint slice config controller"
	I0805 13:03:39.652361       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 13:03:39.654472       1 config.go:326] "Starting node config controller"
	I0805 13:03:39.654539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 13:03:39.752537       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 13:03:39.752575       1 shared_informer.go:320] Caches are synced for service config
	I0805 13:03:39.755422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4a71aa20c85d5807d25a2276d35a85b10e2fd1662fd320ae8cb487c535505270] <==
	W0805 13:03:30.210940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 13:03:30.211214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:30.211084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 13:03:30.211311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:30.211124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 13:03:30.211377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:30.211182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:30.211624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.015066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:31.015241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.021945       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 13:03:31.022050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.206387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 13:03:31.206488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.317910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 13:03:31.318005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.349963       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 13:03:31.350090       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0805 13:03:31.383130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 13:03:31.383547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.403608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:31.404566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0805 13:03:31.416504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 13:03:31.416633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0805 13:03:33.801686       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 13:17:55 no-preload-669469 kubelet[3334]: E0805 13:17:55.763239    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:18:02 no-preload-669469 kubelet[3334]: E0805 13:18:02.983480    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863882982987002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:02 no-preload-669469 kubelet[3334]: E0805 13:18:02.983828    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863882982987002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:08 no-preload-669469 kubelet[3334]: E0805 13:18:08.765429    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:18:12 no-preload-669469 kubelet[3334]: E0805 13:18:12.985635    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863892985203599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:12 no-preload-669469 kubelet[3334]: E0805 13:18:12.985695    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863892985203599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:22 no-preload-669469 kubelet[3334]: E0805 13:18:22.763099    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:18:22 no-preload-669469 kubelet[3334]: E0805 13:18:22.988488    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863902987822607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:22 no-preload-669469 kubelet[3334]: E0805 13:18:22.988827    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863902987822607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:32 no-preload-669469 kubelet[3334]: E0805 13:18:32.806080    3334 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:18:32 no-preload-669469 kubelet[3334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:18:32 no-preload-669469 kubelet[3334]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:18:32 no-preload-669469 kubelet[3334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:18:32 no-preload-669469 kubelet[3334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:18:32 no-preload-669469 kubelet[3334]: E0805 13:18:32.990413    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863912989666491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:32 no-preload-669469 kubelet[3334]: E0805 13:18:32.990451    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863912989666491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:34 no-preload-669469 kubelet[3334]: E0805 13:18:34.763199    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:18:42 no-preload-669469 kubelet[3334]: E0805 13:18:42.992584    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863922992230392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:42 no-preload-669469 kubelet[3334]: E0805 13:18:42.993110    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863922992230392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:46 no-preload-669469 kubelet[3334]: E0805 13:18:46.763490    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:18:52 no-preload-669469 kubelet[3334]: E0805 13:18:52.994500    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863932994132967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:18:52 no-preload-669469 kubelet[3334]: E0805 13:18:52.995592    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863932994132967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:19:01 no-preload-669469 kubelet[3334]: E0805 13:19:01.763948    3334 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-x4j7b" podUID="55a747e4-f9a7-41f1-b584-470048ba6fcb"
	Aug 05 13:19:02 no-preload-669469 kubelet[3334]: E0805 13:19:02.997065    3334 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863942996614376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 05 13:19:02 no-preload-669469 kubelet[3334]: E0805 13:19:02.997759    3334 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863942996614376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [720f5cc7faa808968b90cc1f67825bc5c2a55fb4bd51337abdedb43b051038e1] <==
	I0805 13:03:39.564907       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 13:03:39.581832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 13:03:39.581942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 13:03:39.591421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 13:03:39.591695       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-669469_a33707dc-914f-4c2f-9543-ab961615e6e7!
	I0805 13:03:39.594182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ae93881-5737-4b1f-8fa9-1574a3d54891", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-669469_a33707dc-914f-4c2f-9543-ab961615e6e7 became leader
	I0805 13:03:39.692390       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-669469_a33707dc-914f-4c2f-9543-ab961615e6e7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-669469 -n no-preload-669469
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-669469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-x4j7b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-669469 describe pod metrics-server-6867b74b74-x4j7b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-669469 describe pod metrics-server-6867b74b74-x4j7b: exit status 1 (79.09175ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-x4j7b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-669469 describe pod metrics-server-6867b74b74-x4j7b: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (376.05s)
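Note: the post-mortem above can be reproduced by hand while the profile still exists. A minimal sketch, assuming the no-preload-669469 context is still reachable; the -n kube-system flag is added here because the kubelet log places the pod in kube-system, and the NotFound above is consistent with the harness's describe call defaulting to the "default" namespace:

	# same query the harness runs to find non-Running pods across all namespaces
	kubectl --context no-preload-669469 get po -A --field-selector=status.phase!=Running
	# describe the reported pod in its actual namespace (kube-system per the kubelet log)
	kubectl --context no-preload-669469 -n kube-system describe pod metrics-server-6867b74b74-x4j7b
	# collect the last 25 lines of cluster logs, as the harness does
	out/minikube-linux-amd64 -p no-preload-669469 logs -n 25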

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (396.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-321139 -n embed-certs-321139
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-05 13:19:40.396042941 +0000 UTC m=+6767.353366638
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-321139 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-321139 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.668µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-321139 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
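For reference, the assertion at start_stop_delete_test.go:297 checks that the dashboard addon's scraper deployment references registry.k8s.io/echoserver:1.4, the override passed via --images=MetricsScraper (see the Audit table below). A minimal sketch of an equivalent manual check, assuming the embed-certs-321139 profile is still running; the jsonpath query is an illustration, not the harness's own command:

	# print the image(s) used by the dashboard-metrics-scraper deployment
	kubectl --context embed-certs-321139 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the failure above means no pod with k8s-app=kubernetes-dashboard became Ready within 9m0s,
	# so the deployment may not exist yet; kubectl will then report NotFound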
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-321139 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-321139 logs -n 25: (1.197640985s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 13:18 UTC | 05 Aug 24 13:18 UTC |
	| start   | -p newest-cni-202226 --memory=2200 --alsologtostderr   | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:18 UTC | 05 Aug 24 13:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	| addons  | enable metrics-server -p newest-cni-202226             | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-202226                                   | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-202226                  | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-202226 --memory=2200 --alsologtostderr   | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 13:19:28
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 13:19:28.411598  458687 out.go:291] Setting OutFile to fd 1 ...
	I0805 13:19:28.411759  458687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 13:19:28.411771  458687 out.go:304] Setting ErrFile to fd 2...
	I0805 13:19:28.411778  458687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 13:19:28.412019  458687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 13:19:28.412593  458687 out.go:298] Setting JSON to false
	I0805 13:19:28.413675  458687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10915,"bootTime":1722853053,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 13:19:28.413743  458687 start.go:139] virtualization: kvm guest
	I0805 13:19:28.415870  458687 out.go:177] * [newest-cni-202226] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 13:19:28.417210  458687 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 13:19:28.417213  458687 notify.go:220] Checking for updates...
	I0805 13:19:28.419857  458687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 13:19:28.421460  458687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:19:28.422956  458687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 13:19:28.424264  458687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 13:19:28.425536  458687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 13:19:28.427181  458687 config.go:182] Loaded profile config "newest-cni-202226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:19:28.427560  458687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:19:28.427629  458687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:19:28.444910  458687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0805 13:19:28.445322  458687 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:19:28.445820  458687 main.go:141] libmachine: Using API Version  1
	I0805 13:19:28.445840  458687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:19:28.446250  458687 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:19:28.446455  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:28.446749  458687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 13:19:28.447086  458687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:19:28.447127  458687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:19:28.462182  458687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I0805 13:19:28.462633  458687 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:19:28.463095  458687 main.go:141] libmachine: Using API Version  1
	I0805 13:19:28.463121  458687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:19:28.463534  458687 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:19:28.463695  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:28.499683  458687 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 13:19:28.500969  458687 start.go:297] selected driver: kvm2
	I0805 13:19:28.500986  458687 start.go:901] validating driver "kvm2" against &{Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 13:19:28.501108  458687 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 13:19:28.501892  458687 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 13:19:28.501962  458687 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 13:19:28.516954  458687 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 13:19:28.517305  458687 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 13:19:28.517368  458687 cni.go:84] Creating CNI manager for ""
	I0805 13:19:28.517383  458687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:19:28.517431  458687 start.go:340] cluster config:
	{Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 13:19:28.517537  458687 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 13:19:28.519260  458687 out.go:177] * Starting "newest-cni-202226" primary control-plane node in "newest-cni-202226" cluster
	I0805 13:19:28.520358  458687 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 13:19:28.520390  458687 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0805 13:19:28.520396  458687 cache.go:56] Caching tarball of preloaded images
	I0805 13:19:28.520470  458687 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 13:19:28.520480  458687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0805 13:19:28.520578  458687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/config.json ...
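The cluster config dumps above and the "Saving config to ..." line show the profile being persisted as profiles/newest-cni-202226/config.json under the minikube home directory. As an illustration only, a few of those fields could be read back with a small Go program; the struct below is a hedged sketch covering just a subset of what the log prints (the JSON field names are assumed to match the Go names shown in the dump, and the real minikube ClusterConfig type has many more fields):

// inspect_profile.go - illustrative sketch; field subset and JSON key names
// are assumptions taken from the cluster config dump logged above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type profileConfig struct {
	Name             string `json:"Name"`
	Driver           string `json:"Driver"`
	Memory           int    `json:"Memory"`
	CPUs             int    `json:"CPUs"`
	KubernetesConfig struct {
		KubernetesVersion string `json:"KubernetesVersion"`
		ContainerRuntime  string `json:"ContainerRuntime"`
		NetworkPlugin     string `json:"NetworkPlugin"`
		FeatureGates      string `json:"FeatureGates"`
	} `json:"KubernetesConfig"`
}

func main() {
	// Path pattern taken from the "Saving config to ..." log line above;
	// adjust MINIKUBE_HOME for a different environment.
	path := os.ExpandEnv("$HOME/.minikube/profiles/newest-cni-202226/config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("read profile config: %v", err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatalf("parse profile config: %v", err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s mem=%dMB cpus=%d\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime,
		cfg.KubernetesConfig.KubernetesVersion, cfg.Memory, cfg.CPUs)
}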
	I0805 13:19:28.520746  458687 start.go:360] acquireMachinesLock for newest-cni-202226: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 13:19:28.520784  458687 start.go:364] duration metric: took 21.589µs to acquireMachinesLock for "newest-cni-202226"
	I0805 13:19:28.520798  458687 start.go:96] Skipping create...Using existing machine configuration
	I0805 13:19:28.520806  458687 fix.go:54] fixHost starting: 
	I0805 13:19:28.521069  458687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:19:28.521097  458687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:19:28.534390  458687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0805 13:19:28.534782  458687 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:19:28.535303  458687 main.go:141] libmachine: Using API Version  1
	I0805 13:19:28.535329  458687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:19:28.535805  458687 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:19:28.535981  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:28.536136  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetState
	I0805 13:19:28.537589  458687 fix.go:112] recreateIfNeeded on newest-cni-202226: state=Stopped err=<nil>
	I0805 13:19:28.537616  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	W0805 13:19:28.537781  458687 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 13:19:28.540526  458687 out.go:177] * Restarting existing kvm2 VM for "newest-cni-202226" ...
	I0805 13:19:28.541786  458687 main.go:141] libmachine: (newest-cni-202226) Calling .Start
	I0805 13:19:28.541950  458687 main.go:141] libmachine: (newest-cni-202226) Ensuring networks are active...
	I0805 13:19:28.542675  458687 main.go:141] libmachine: (newest-cni-202226) Ensuring network default is active
	I0805 13:19:28.542972  458687 main.go:141] libmachine: (newest-cni-202226) Ensuring network mk-newest-cni-202226 is active
	I0805 13:19:28.543330  458687 main.go:141] libmachine: (newest-cni-202226) Getting domain xml...
	I0805 13:19:28.544112  458687 main.go:141] libmachine: (newest-cni-202226) Creating domain...
	I0805 13:19:29.793563  458687 main.go:141] libmachine: (newest-cni-202226) Waiting to get IP...
	I0805 13:19:29.794452  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:29.794912  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:29.794991  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:29.794889  458722 retry.go:31] will retry after 297.982316ms: waiting for machine to come up
	I0805 13:19:30.094516  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:30.094958  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:30.094990  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:30.094906  458722 retry.go:31] will retry after 280.896278ms: waiting for machine to come up
	I0805 13:19:30.377490  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:30.378038  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:30.378069  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:30.377971  458722 retry.go:31] will retry after 411.104523ms: waiting for machine to come up
	I0805 13:19:30.790711  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:30.791176  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:30.791196  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:30.791117  458722 retry.go:31] will retry after 571.44048ms: waiting for machine to come up
	I0805 13:19:31.363967  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:31.364476  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:31.364503  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:31.364420  458722 retry.go:31] will retry after 544.978197ms: waiting for machine to come up
	I0805 13:19:31.911115  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:31.911548  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:31.911577  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:31.911498  458722 retry.go:31] will retry after 928.057354ms: waiting for machine to come up
	I0805 13:19:32.840923  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:32.841319  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:32.841348  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:32.841262  458722 retry.go:31] will retry after 1.147019331s: waiting for machine to come up
	I0805 13:19:33.989869  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:33.990318  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:33.990364  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:33.990255  458722 retry.go:31] will retry after 1.085377326s: waiting for machine to come up
	I0805 13:19:35.076930  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:35.077397  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:35.077424  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:35.077350  458722 retry.go:31] will retry after 1.586226868s: waiting for machine to come up
	I0805 13:19:36.664862  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:36.665276  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:36.665296  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:36.665236  458722 retry.go:31] will retry after 1.87061941s: waiting for machine to come up
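The retry.go lines above show libmachine polling the restarted VM for an IP address, sleeping for a progressively longer, jittered interval between attempts ("will retry after 297ms ... 1.87s"). The following is a minimal sketch of that wait-with-backoff pattern, with hypothetical helper names; it is not minikube's actual libmachine code:

// backoff_sketch.go - illustrative only: mirrors the "will retry after ..."
// pattern in the log above, not minikube's real implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a growing, jittered delay until it returns an
// address or the overall timeout elapses.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter and grow the delay, roughly matching the intervals logged above.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Hypothetical lookup; in minikube this would query the libvirt DHCP leases.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.136", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}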
	
	
	==> CRI-O <==
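The entries below show CRI-O on embed-certs-321139 answering the kubelet's periodic CRI calls (Version, ImageFsInfo, ListContainers) over its gRPC socket. For reference, the same queries can be issued directly against the CRI v1 API; the sketch below assumes CRI-O's default unix socket path and the k8s.io/cri-api client packages, and is not part of the test itself:

// cri_query.go - sketch: issues the same Version and ListContainers calls
// that appear in the log below, assuming the default CRI-O socket.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// An empty filter corresponds to the "No filters were applied, returning
	// full container list" responses in the log below.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}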
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.055892064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863981055858926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc3377bd-56f6-48f2-b622-78f808d73b1b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.058640590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67fc1a04-7fa6-40e7-857e-c07d11869d7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.059342220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67fc1a04-7fa6-40e7-857e-c07d11869d7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.059635565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67fc1a04-7fa6-40e7-857e-c07d11869d7e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.105189967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2da3a2b1-d5b1-47e8-bb83-345f2a47e06e name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.105318075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2da3a2b1-d5b1-47e8-bb83-345f2a47e06e name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.106842103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40c9c7e6-9b3c-47be-8421-180957985a86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.107422304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863981107398108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40c9c7e6-9b3c-47be-8421-180957985a86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.108155068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1d70b8a-5cf9-4ed4-a76f-7dd110fb7a0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.108205776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1d70b8a-5cf9-4ed4-a76f-7dd110fb7a0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.108476919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1d70b8a-5cf9-4ed4-a76f-7dd110fb7a0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.152656858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc528018-24a7-4ae4-9599-8b4a88c9789e name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.152767861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc528018-24a7-4ae4-9599-8b4a88c9789e name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.154064601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edcf3eb5-9f9b-4061-a549-71ffa440247d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.154714611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863981154687089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edcf3eb5-9f9b-4061-a549-71ffa440247d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.155478230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42490bed-7226-4c20-8d7d-a7e2aea010da name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.155546471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42490bed-7226-4c20-8d7d-a7e2aea010da name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.155728452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42490bed-7226-4c20-8d7d-a7e2aea010da name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.196698896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc03f5e4-6522-4ae6-9721-c98251b62bb1 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.196980252Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc03f5e4-6522-4ae6-9721-c98251b62bb1 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.199861867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bcccd0e-e7d6-419b-b86e-7e5b5f416298 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.200220167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863981200201204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bcccd0e-e7d6-419b-b86e-7e5b5f416298 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.200848126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b314ff81-e953-419c-8432-a1cb8f19d577 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.201038274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b314ff81-e953-419c-8432-a1cb8f19d577 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:19:41 embed-certs-321139 crio[733]: time="2024-08-05 13:19:41.201460348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722862808306612518,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edf973c34f98728f31f5f81f4ad25b839ec3dd0f41ed930d65c3d4f2f191948,PodSandboxId:78c2f0eda34cccb01df09e520ae26a9b7bc2185b9f9d00a419136e01a3063a3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722862787551881955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61652096-d612-4b1d-bac3-a0df9a0e629b,},Annotations:map[string]string{io.kubernetes.container.hash: 1b96c50c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb,PodSandboxId:d632aafdacf52b10e9b2b7bf7f3deaf56aaefbff50c31ed27a9e3b8ffc07ccfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722862785158048980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wm7lh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3851d79-431c-4629-bfdc-ed9615cd46aa,},Annotations:map[string]string{io.kubernetes.container.hash: ca25e05e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0,PodSandboxId:3ea21207a040295af58068810ab0010cac2197b6c4ebf43384ac02addb445654,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722862777536935964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shgv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19c5991-505f-4105-8
c20-7afd63dd8e61,},Annotations:map[string]string{io.kubernetes.container.hash: ef26fde1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86,PodSandboxId:8d6b517e958ba42aedc04b4e350f3fadd7788b7f5f30417c4f2cdbf6f52f739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722862777519087317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b2db057-5262-4648-93ea-f2f0ed51a
19b,},Annotations:map[string]string{io.kubernetes.container.hash: a22cb328,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804,PodSandboxId:72626e53802dff9bc26788699e920a66326f3e39061ac44d3ff27a7dd7939fb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722862772795199915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5d8f139dc62eb6728616077f9f3d55,},Annotations:map[string]string{io.kub
ernetes.container.hash: 82e6bf3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f,PodSandboxId:cdba31db1da5c242b90f5578d1c9b81ccee46b1bbed039c101dc116cc2ed72c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722862772783826722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d805150634b40a739cf75f6352c5c67,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7,PodSandboxId:980b16fe922b81963439c38a6c9df44bd68292b9711e8ed086427a17428aab87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722862772722446344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac13876e8cffeed8789fb80a6043482,},Annotations:map[string]string{io.
kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756,PodSandboxId:9b3e234d1497348d2f1230a7a8716892424592e79944981064e92a2ac2ce2de6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722862772672738287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-321139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9e377164f8f6abffa50cd66ffd3878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b314ff81-e953-419c-8432-a1cb8f19d577 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07a14eee4cdae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   8d6b517e958ba       storage-provisioner
	8edf973c34f98       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   78c2f0eda34cc       busybox
	b22c1fc4aed8b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   d632aafdacf52       coredns-7db6d8ff4d-wm7lh
	c905047116d6c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      20 minutes ago      Running             kube-proxy                1                   3ea21207a0402       kube-proxy-shgv2
	2d096466c2e0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   8d6b517e958ba       storage-provisioner
	85c424836db21       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   72626e53802df       etcd-embed-certs-321139
	75f0d0c4ce468       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      20 minutes ago      Running             kube-controller-manager   1                   cdba31db1da5c       kube-controller-manager-embed-certs-321139
	be59c5f295285       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      20 minutes ago      Running             kube-apiserver            1                   980b16fe922b8       kube-apiserver-embed-certs-321139
	8b55325728604       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      20 minutes ago      Running             kube-scheduler            1                   9b3e234d14973       kube-scheduler-embed-certs-321139
	
	
	==> coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57604 - 63795 "HINFO IN 1122241197051515001.1866069707439365595. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022287696s
	
	
	==> describe nodes <==
	Name:               embed-certs-321139
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-321139
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=embed-certs-321139
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T12_50_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 12:50:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-321139
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 13:19:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 13:15:25 +0000   Mon, 05 Aug 2024 12:50:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 13:15:25 +0000   Mon, 05 Aug 2024 12:50:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 13:15:25 +0000   Mon, 05 Aug 2024 12:50:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 13:15:25 +0000   Mon, 05 Aug 2024 12:59:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    embed-certs-321139
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d261453e09c4e6981750858662a1300
	  System UUID:                7d261453-e09c-4e69-8175-0858662a1300
	  Boot ID:                    9d8267a6-aa4e-40a9-b37c-a96dabe9dd0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-wm7lh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-321139                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-321139             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-321139    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-shgv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-321139             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-k8mrt               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-321139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-321139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-321139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-321139 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-321139 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-321139 event: Registered Node embed-certs-321139 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-321139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-321139 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-321139 event: Registered Node embed-certs-321139 in Controller
	
	
	==> dmesg <==
	[Aug 5 12:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055322] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042642] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.183394] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.655686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.450586] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.914135] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.059399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070737] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.202615] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.115666] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.297815] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.497767] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.072010] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.193486] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +5.605592] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.966443] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +3.748656] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.626320] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] <==
	{"level":"info","ts":"2024-08-05T12:59:35.006236Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a14f9258d3b66c75","local-member-attributes":"{Name:embed-certs-321139 ClientURLs:[https://192.168.39.196:2379]}","request-path":"/0/members/a14f9258d3b66c75/attributes","cluster-id":"8309c60c27e527a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T12:59:35.006428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:59:35.006549Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T12:59:35.006913Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T12:59:35.006925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T12:59:35.008857Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T12:59:35.009041Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.196:2379"}
	{"level":"info","ts":"2024-08-05T13:09:35.037471Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":844}
	{"level":"info","ts":"2024-08-05T13:09:35.048468Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":844,"took":"10.652872ms","hash":1339320383,"current-db-size-bytes":2232320,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2232320,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-05T13:09:35.048546Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1339320383,"revision":844,"compact-revision":-1}
	{"level":"info","ts":"2024-08-05T13:14:35.044566Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1086}
	{"level":"info","ts":"2024-08-05T13:14:35.048753Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1086,"took":"3.628847ms","hash":1603953756,"current-db-size-bytes":2232320,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1175552,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-08-05T13:14:35.048829Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1603953756,"revision":1086,"compact-revision":844}
	{"level":"warn","ts":"2024-08-05T13:18:57.779398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.957291ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7815312306264623480 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.196\" mod_revision:1535 > success:<request_put:<key:\"/registry/masterleases/192.168.39.196\" value_size:67 lease:7815312306264623478 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.196\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-05T13:18:57.779829Z","caller":"traceutil/trace.go:171","msg":"trace[1098026121] transaction","detail":"{read_only:false; response_revision:1542; number_of_response:1; }","duration":"252.023382ms","start":"2024-08-05T13:18:57.527768Z","end":"2024-08-05T13:18:57.779791Z","steps":["trace[1098026121] 'process raft request'  (duration: 121.654815ms)","trace[1098026121] 'compare'  (duration: 128.72094ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T13:18:58.17025Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.4353ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7815312306264623485 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1541 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:522 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-05T13:18:58.170507Z","caller":"traceutil/trace.go:171","msg":"trace[1104801668] transaction","detail":"{read_only:false; response_revision:1544; number_of_response:1; }","duration":"283.001393ms","start":"2024-08-05T13:18:57.887492Z","end":"2024-08-05T13:18:58.170493Z","steps":["trace[1104801668] 'process raft request'  (duration: 282.942056ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T13:18:58.170918Z","caller":"traceutil/trace.go:171","msg":"trace[1378242890] transaction","detail":"{read_only:false; response_revision:1543; number_of_response:1; }","duration":"385.258155ms","start":"2024-08-05T13:18:57.785647Z","end":"2024-08-05T13:18:58.170906Z","steps":["trace[1378242890] 'process raft request'  (duration: 126.100358ms)","trace[1378242890] 'compare'  (duration: 258.178397ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T13:18:58.172022Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T13:18:57.785638Z","time spent":"386.323903ms","remote":"127.0.0.1:51022","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":595,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1541 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:522 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-05T13:18:58.170957Z","caller":"traceutil/trace.go:171","msg":"trace[2070467645] linearizableReadLoop","detail":"{readStateIndex:1815; appliedIndex:1814; }","duration":"291.329558ms","start":"2024-08-05T13:18:57.879616Z","end":"2024-08-05T13:18:58.170946Z","steps":["trace[2070467645] 'read index received'  (duration: 32.13899ms)","trace[2070467645] 'applied index is now lower than readState.Index'  (duration: 259.189744ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T13:18:58.17102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.391806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T13:18:58.172932Z","caller":"traceutil/trace.go:171","msg":"trace[412383931] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1544; }","duration":"293.326752ms","start":"2024-08-05T13:18:57.879591Z","end":"2024-08-05T13:18:58.172918Z","steps":["trace[412383931] 'agreement among raft nodes before linearized reading'  (duration: 291.394011ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T13:19:35.050387Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1330}
	{"level":"info","ts":"2024-08-05T13:19:35.054027Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1330,"took":"3.324424ms","hash":43076963,"current-db-size-bytes":2232320,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1163264,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-08-05T13:19:35.054129Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":43076963,"revision":1330,"compact-revision":1086}
	
	
	==> kernel <==
	 13:19:41 up 20 min,  0 users,  load average: 0.01, 0.20, 0.19
	Linux embed-certs-321139 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] <==
	I0805 13:14:37.336991       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:15:37.335931       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:15:37.336074       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:15:37.336084       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:15:37.337359       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:15:37.337469       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:15:37.337502       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:17:37.336473       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:17:37.336548       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:17:37.336557       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:17:37.337800       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:17:37.337961       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:17:37.338016       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:19:36.338304       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:19:36.338415       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0805 13:19:37.339579       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:19:37.339726       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:19:37.339758       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:19:37.339625       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:19:37.339820       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:19:37.341127       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] <==
	I0805 13:13:49.639787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:14:19.106416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:14:19.648790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:14:49.111216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:14:49.657509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:15:19.116549       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:15:19.672465       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:15:49.123863       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:15:49.680187       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:15:56.114174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="259.116µs"
	I0805 13:16:09.111864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="170.271µs"
	E0805 13:16:19.128782       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:16:19.687252       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:16:49.134249       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:16:49.696488       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:17:19.139970       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:17:19.712887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:17:49.145552       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:17:49.721125       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:18:19.152752       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:18:19.728613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:18:49.157107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:18:49.735852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:19:19.162167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:19:19.749077       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] <==
	I0805 12:59:37.756894       1 server_linux.go:69] "Using iptables proxy"
	I0805 12:59:37.767237       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0805 12:59:37.835578       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 12:59:37.835641       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 12:59:37.835666       1 server_linux.go:165] "Using iptables Proxier"
	I0805 12:59:37.846864       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 12:59:37.851436       1 server.go:872] "Version info" version="v1.30.3"
	I0805 12:59:37.851502       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:59:37.859743       1 config.go:192] "Starting service config controller"
	I0805 12:59:37.859759       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 12:59:37.859865       1 config.go:101] "Starting endpoint slice config controller"
	I0805 12:59:37.859870       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 12:59:37.861998       1 config.go:319] "Starting node config controller"
	I0805 12:59:37.862032       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 12:59:37.959920       1 shared_informer.go:320] Caches are synced for service config
	I0805 12:59:37.959983       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 12:59:37.962082       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] <==
	I0805 12:59:34.246812       1 serving.go:380] Generated self-signed cert in-memory
	W0805 12:59:36.278609       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 12:59:36.278653       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 12:59:36.278666       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 12:59:36.278672       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 12:59:36.316202       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 12:59:36.316467       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 12:59:36.322690       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 12:59:36.322790       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 12:59:36.322819       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 12:59:36.322833       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 12:59:36.423010       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 13:17:32 embed-certs-321139 kubelet[943]: E0805 13:17:32.125139     943 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:17:32 embed-certs-321139 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:17:32 embed-certs-321139 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:17:32 embed-certs-321139 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:17:32 embed-certs-321139 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:17:34 embed-certs-321139 kubelet[943]: E0805 13:17:34.096510     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:17:49 embed-certs-321139 kubelet[943]: E0805 13:17:49.095980     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:18:00 embed-certs-321139 kubelet[943]: E0805 13:18:00.097572     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:18:13 embed-certs-321139 kubelet[943]: E0805 13:18:13.096441     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:18:28 embed-certs-321139 kubelet[943]: E0805 13:18:28.096105     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:18:32 embed-certs-321139 kubelet[943]: E0805 13:18:32.123904     943 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:18:32 embed-certs-321139 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:18:32 embed-certs-321139 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:18:32 embed-certs-321139 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:18:32 embed-certs-321139 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:18:42 embed-certs-321139 kubelet[943]: E0805 13:18:42.095893     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:18:53 embed-certs-321139 kubelet[943]: E0805 13:18:53.096108     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:19:05 embed-certs-321139 kubelet[943]: E0805 13:19:05.096474     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:19:17 embed-certs-321139 kubelet[943]: E0805 13:19:17.096669     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:19:31 embed-certs-321139 kubelet[943]: E0805 13:19:31.097153     943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k8mrt" podUID="6d400b20-5de5-4046-b773-39766c67cdb4"
	Aug 05 13:19:32 embed-certs-321139 kubelet[943]: E0805 13:19:32.124858     943 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:19:32 embed-certs-321139 kubelet[943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:19:32 embed-certs-321139 kubelet[943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:19:32 embed-certs-321139 kubelet[943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:19:32 embed-certs-321139 kubelet[943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] <==
	I0805 13:00:08.414946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 13:00:08.423918       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 13:00:08.424001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 13:00:08.436404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 13:00:08.436577       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-321139_802883f9-fd9c-4117-8935-6f2099d3f05c!
	I0805 13:00:08.436992       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8dada66-6135-4655-a5db-5fefeff62831", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-321139_802883f9-fd9c-4117-8935-6f2099d3f05c became leader
	I0805 13:00:08.537633       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-321139_802883f9-fd9c-4117-8935-6f2099d3f05c!
	
	
	==> storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] <==
	I0805 12:59:37.701889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0805 13:00:07.704372       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-321139 -n embed-certs-321139
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-321139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-k8mrt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-321139 describe pod metrics-server-569cc877fc-k8mrt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-321139 describe pod metrics-server-569cc877fc-k8mrt: exit status 1 (62.623196ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-k8mrt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-321139 describe pod metrics-server-569cc877fc-k8mrt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (396.34s)
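The post-mortem above flags metrics-server-569cc877fc-k8mrt as the only non-running pod, and the kubelet log shows why: the addon was enabled with its image registry overridden to fake.domain (see the `addons enable metrics-server --registries=MetricsServer=fake.domain` entries in the Audit table further down), so the image pull backs off indefinitely and the pod never becomes Ready. A minimal sketch for confirming this by hand against the same profile, assuming a working kubeconfig for the embed-certs-321139 context; these commands are illustrative and are not part of the captured test output (the pod name is taken from the logs above):

	kubectl --context embed-certs-321139 -n kube-system get pods
	kubectl --context embed-certs-321139 -n kube-system describe pod metrics-server-569cc877fc-k8mrt
	# The pod's Events would be expected to show the failed pull of
	# "fake.domain/registry.k8s.io/echoserver:1.4", matching the kubelet ImagePullBackOff messages above.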

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (445.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-05 13:20:34.558496382 +0000 UTC m=+6821.515820074
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-371585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-371585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
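For this check the harness waits on pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A quick manual equivalent of that wait, shown only as an illustrative sketch under the same assumptions (working kubeconfig for the default-k8s-diff-port-371585 context; not part of the captured output):

	kubectl --context default-k8s-diff-port-371585 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# An empty result would be consistent with the 9m0s timeout reported above,
	# i.e. the dashboard addon never deployed any pods after the stop/start cycle.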
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-371585 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-371585 logs -n 25: (1.149418349s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 13:18 UTC | 05 Aug 24 13:18 UTC |
	| start   | -p newest-cni-202226 --memory=2200 --alsologtostderr   | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:18 UTC | 05 Aug 24 13:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	| addons  | enable metrics-server -p newest-cni-202226             | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-202226                                   | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-202226                  | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-202226 --memory=2200 --alsologtostderr   | newest-cni-202226            | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 13:19 UTC | 05 Aug 24 13:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 13:19:28
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 13:19:28.411598  458687 out.go:291] Setting OutFile to fd 1 ...
	I0805 13:19:28.411759  458687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 13:19:28.411771  458687 out.go:304] Setting ErrFile to fd 2...
	I0805 13:19:28.411778  458687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 13:19:28.412019  458687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 13:19:28.412593  458687 out.go:298] Setting JSON to false
	I0805 13:19:28.413675  458687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10915,"bootTime":1722853053,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 13:19:28.413743  458687 start.go:139] virtualization: kvm guest
	I0805 13:19:28.415870  458687 out.go:177] * [newest-cni-202226] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 13:19:28.417210  458687 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 13:19:28.417213  458687 notify.go:220] Checking for updates...
	I0805 13:19:28.419857  458687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 13:19:28.421460  458687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:19:28.422956  458687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 13:19:28.424264  458687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 13:19:28.425536  458687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 13:19:28.427181  458687 config.go:182] Loaded profile config "newest-cni-202226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:19:28.427560  458687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:19:28.427629  458687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:19:28.444910  458687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0805 13:19:28.445322  458687 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:19:28.445820  458687 main.go:141] libmachine: Using API Version  1
	I0805 13:19:28.445840  458687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:19:28.446250  458687 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:19:28.446455  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:28.446749  458687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 13:19:28.447086  458687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:19:28.447127  458687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:19:28.462182  458687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I0805 13:19:28.462633  458687 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:19:28.463095  458687 main.go:141] libmachine: Using API Version  1
	I0805 13:19:28.463121  458687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:19:28.463534  458687 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:19:28.463695  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:28.499683  458687 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 13:19:28.500969  458687 start.go:297] selected driver: kvm2
	I0805 13:19:28.500986  458687 start.go:901] validating driver "kvm2" against &{Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 13:19:28.501108  458687 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 13:19:28.501892  458687 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 13:19:28.501962  458687 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 13:19:28.516954  458687 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 13:19:28.517305  458687 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 13:19:28.517368  458687 cni.go:84] Creating CNI manager for ""
	I0805 13:19:28.517383  458687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:19:28.517431  458687 start.go:340] cluster config:
	{Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 13:19:28.517537  458687 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 13:19:28.519260  458687 out.go:177] * Starting "newest-cni-202226" primary control-plane node in "newest-cni-202226" cluster
	I0805 13:19:28.520358  458687 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 13:19:28.520390  458687 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0805 13:19:28.520396  458687 cache.go:56] Caching tarball of preloaded images
	I0805 13:19:28.520470  458687 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 13:19:28.520480  458687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0805 13:19:28.520578  458687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/config.json ...
	I0805 13:19:28.520746  458687 start.go:360] acquireMachinesLock for newest-cni-202226: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 13:19:28.520784  458687 start.go:364] duration metric: took 21.589µs to acquireMachinesLock for "newest-cni-202226"
	I0805 13:19:28.520798  458687 start.go:96] Skipping create...Using existing machine configuration
	I0805 13:19:28.520806  458687 fix.go:54] fixHost starting: 
	I0805 13:19:28.521069  458687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:19:28.521097  458687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:19:28.534390  458687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0805 13:19:28.534782  458687 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:19:28.535303  458687 main.go:141] libmachine: Using API Version  1
	I0805 13:19:28.535329  458687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:19:28.535805  458687 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:19:28.535981  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:28.536136  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetState
	I0805 13:19:28.537589  458687 fix.go:112] recreateIfNeeded on newest-cni-202226: state=Stopped err=<nil>
	I0805 13:19:28.537616  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	W0805 13:19:28.537781  458687 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 13:19:28.540526  458687 out.go:177] * Restarting existing kvm2 VM for "newest-cni-202226" ...
	I0805 13:19:28.541786  458687 main.go:141] libmachine: (newest-cni-202226) Calling .Start
	I0805 13:19:28.541950  458687 main.go:141] libmachine: (newest-cni-202226) Ensuring networks are active...
	I0805 13:19:28.542675  458687 main.go:141] libmachine: (newest-cni-202226) Ensuring network default is active
	I0805 13:19:28.542972  458687 main.go:141] libmachine: (newest-cni-202226) Ensuring network mk-newest-cni-202226 is active
	I0805 13:19:28.543330  458687 main.go:141] libmachine: (newest-cni-202226) Getting domain xml...
	I0805 13:19:28.544112  458687 main.go:141] libmachine: (newest-cni-202226) Creating domain...
	I0805 13:19:29.793563  458687 main.go:141] libmachine: (newest-cni-202226) Waiting to get IP...
	I0805 13:19:29.794452  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:29.794912  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:29.794991  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:29.794889  458722 retry.go:31] will retry after 297.982316ms: waiting for machine to come up
	I0805 13:19:30.094516  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:30.094958  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:30.094990  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:30.094906  458722 retry.go:31] will retry after 280.896278ms: waiting for machine to come up
	I0805 13:19:30.377490  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:30.378038  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:30.378069  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:30.377971  458722 retry.go:31] will retry after 411.104523ms: waiting for machine to come up
	I0805 13:19:30.790711  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:30.791176  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:30.791196  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:30.791117  458722 retry.go:31] will retry after 571.44048ms: waiting for machine to come up
	I0805 13:19:31.363967  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:31.364476  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:31.364503  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:31.364420  458722 retry.go:31] will retry after 544.978197ms: waiting for machine to come up
	I0805 13:19:31.911115  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:31.911548  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:31.911577  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:31.911498  458722 retry.go:31] will retry after 928.057354ms: waiting for machine to come up
	I0805 13:19:32.840923  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:32.841319  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:32.841348  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:32.841262  458722 retry.go:31] will retry after 1.147019331s: waiting for machine to come up
	I0805 13:19:33.989869  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:33.990318  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:33.990364  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:33.990255  458722 retry.go:31] will retry after 1.085377326s: waiting for machine to come up
	I0805 13:19:35.076930  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:35.077397  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:35.077424  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:35.077350  458722 retry.go:31] will retry after 1.586226868s: waiting for machine to come up
	I0805 13:19:36.664862  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:36.665276  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:36.665296  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:36.665236  458722 retry.go:31] will retry after 1.87061941s: waiting for machine to come up
	I0805 13:19:38.538087  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:38.538641  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:38.538668  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:38.538583  458722 retry.go:31] will retry after 2.33307517s: waiting for machine to come up
	I0805 13:19:40.873278  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:40.873774  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:40.873808  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:40.873727  458722 retry.go:31] will retry after 3.400675563s: waiting for machine to come up
	I0805 13:19:44.276248  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:44.276907  458687 main.go:141] libmachine: (newest-cni-202226) DBG | unable to find current IP address of domain newest-cni-202226 in network mk-newest-cni-202226
	I0805 13:19:44.276956  458687 main.go:141] libmachine: (newest-cni-202226) DBG | I0805 13:19:44.276866  458722 retry.go:31] will retry after 4.301052426s: waiting for machine to come up
	I0805 13:19:48.579304  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.579656  458687 main.go:141] libmachine: (newest-cni-202226) Found IP for machine: 192.168.61.136
	I0805 13:19:48.579688  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has current primary IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.579701  458687 main.go:141] libmachine: (newest-cni-202226) Reserving static IP address...
	I0805 13:19:48.580167  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "newest-cni-202226", mac: "52:54:00:13:72:ff", ip: "192.168.61.136"} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:48.580205  458687 main.go:141] libmachine: (newest-cni-202226) DBG | skip adding static IP to network mk-newest-cni-202226 - found existing host DHCP lease matching {name: "newest-cni-202226", mac: "52:54:00:13:72:ff", ip: "192.168.61.136"}
	I0805 13:19:48.580218  458687 main.go:141] libmachine: (newest-cni-202226) Reserved static IP address: 192.168.61.136
	I0805 13:19:48.580231  458687 main.go:141] libmachine: (newest-cni-202226) Waiting for SSH to be available...
	I0805 13:19:48.580245  458687 main.go:141] libmachine: (newest-cni-202226) DBG | Getting to WaitForSSH function...
	I0805 13:19:48.582324  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.582621  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:48.582646  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.582759  458687 main.go:141] libmachine: (newest-cni-202226) DBG | Using SSH client type: external
	I0805 13:19:48.582786  458687 main.go:141] libmachine: (newest-cni-202226) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa (-rw-------)
	I0805 13:19:48.582826  458687 main.go:141] libmachine: (newest-cni-202226) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 13:19:48.582841  458687 main.go:141] libmachine: (newest-cni-202226) DBG | About to run SSH command:
	I0805 13:19:48.582853  458687 main.go:141] libmachine: (newest-cni-202226) DBG | exit 0
	I0805 13:19:48.711897  458687 main.go:141] libmachine: (newest-cni-202226) DBG | SSH cmd err, output: <nil>: 
	I0805 13:19:48.712303  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetConfigRaw
	I0805 13:19:48.712971  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:19:48.715865  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.716253  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:48.716292  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.716528  458687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/config.json ...
	I0805 13:19:48.716714  458687 machine.go:94] provisionDockerMachine start ...
	I0805 13:19:48.716732  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:48.716946  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:48.719281  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.719629  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:48.719656  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.719793  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:48.719954  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:48.720066  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:48.720210  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:48.720346  458687 main.go:141] libmachine: Using SSH client type: native
	I0805 13:19:48.720588  458687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:19:48.720601  458687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 13:19:48.831864  458687 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 13:19:48.831908  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetMachineName
	I0805 13:19:48.832207  458687 buildroot.go:166] provisioning hostname "newest-cni-202226"
	I0805 13:19:48.832242  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetMachineName
	I0805 13:19:48.832450  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:48.834938  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.835319  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:48.835369  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.835487  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:48.835674  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:48.835857  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:48.836016  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:48.836161  458687 main.go:141] libmachine: Using SSH client type: native
	I0805 13:19:48.836346  458687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:19:48.836362  458687 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-202226 && echo "newest-cni-202226" | sudo tee /etc/hostname
	I0805 13:19:48.963494  458687 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-202226
	
	I0805 13:19:48.963533  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:48.966434  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.966815  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:48.966845  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:48.967006  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:48.967214  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:48.967469  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:48.967643  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:48.967860  458687 main.go:141] libmachine: Using SSH client type: native
	I0805 13:19:48.968039  458687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:19:48.968057  458687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-202226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-202226/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-202226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 13:19:49.090917  458687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 13:19:49.090954  458687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 13:19:49.090986  458687 buildroot.go:174] setting up certificates
	I0805 13:19:49.090994  458687 provision.go:84] configureAuth start
	I0805 13:19:49.091005  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetMachineName
	I0805 13:19:49.091288  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:19:49.093974  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.094299  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:49.094329  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.094450  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:49.096537  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.096889  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:49.096917  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.097097  458687 provision.go:143] copyHostCerts
	I0805 13:19:49.097162  458687 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 13:19:49.097178  458687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 13:19:49.097240  458687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 13:19:49.097362  458687 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 13:19:49.097370  458687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 13:19:49.097422  458687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 13:19:49.097503  458687 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 13:19:49.097511  458687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 13:19:49.097540  458687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 13:19:49.097602  458687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.newest-cni-202226 san=[127.0.0.1 192.168.61.136 localhost minikube newest-cni-202226]
	I0805 13:19:49.304927  458687 provision.go:177] copyRemoteCerts
	I0805 13:19:49.304995  458687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 13:19:49.305025  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:49.307802  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.308119  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:49.308150  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.308296  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:49.308519  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:49.308670  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:49.308817  458687 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:19:49.398862  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 13:19:49.423424  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 13:19:49.446694  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 13:19:49.471063  458687 provision.go:87] duration metric: took 380.053293ms to configureAuth
	I0805 13:19:49.471102  458687 buildroot.go:189] setting minikube options for container-runtime
	I0805 13:19:49.471349  458687 config.go:182] Loaded profile config "newest-cni-202226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:19:49.471464  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:49.474441  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.474797  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:49.474831  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.475010  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:49.475210  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:49.475468  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:49.475643  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:49.475876  458687 main.go:141] libmachine: Using SSH client type: native
	I0805 13:19:49.476168  458687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:19:49.476198  458687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 13:19:49.764525  458687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 13:19:49.764562  458687 machine.go:97] duration metric: took 1.047833063s to provisionDockerMachine
	I0805 13:19:49.764580  458687 start.go:293] postStartSetup for "newest-cni-202226" (driver="kvm2")
	I0805 13:19:49.764594  458687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 13:19:49.764619  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:49.764993  458687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 13:19:49.765023  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:49.767719  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.768020  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:49.768048  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.768247  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:49.768462  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:49.768636  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:49.768787  458687 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:19:49.854825  458687 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 13:19:49.859088  458687 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 13:19:49.859111  458687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 13:19:49.859206  458687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 13:19:49.859324  458687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 13:19:49.859502  458687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 13:19:49.869551  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 13:19:49.894915  458687 start.go:296] duration metric: took 130.319257ms for postStartSetup
	I0805 13:19:49.894958  458687 fix.go:56] duration metric: took 21.374152024s for fixHost
	I0805 13:19:49.894981  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:49.897602  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.897874  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:49.897904  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:49.898030  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:49.898218  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:49.898343  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:49.898450  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:49.898667  458687 main.go:141] libmachine: Using SSH client type: native
	I0805 13:19:49.898828  458687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0805 13:19:49.898838  458687 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 13:19:50.012823  458687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722863989.969752848
	
	I0805 13:19:50.012852  458687 fix.go:216] guest clock: 1722863989.969752848
	I0805 13:19:50.012860  458687 fix.go:229] Guest: 2024-08-05 13:19:49.969752848 +0000 UTC Remote: 2024-08-05 13:19:49.894961481 +0000 UTC m=+21.520272776 (delta=74.791367ms)
	I0805 13:19:50.012901  458687 fix.go:200] guest clock delta is within tolerance: 74.791367ms
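
[Editor's note] The two fix.go lines above compare the guest clock against the host clock and accept the restart only because the drift (~74ms) is within tolerance. A minimal Go sketch of that kind of tolerance check follows; the function name and the 2s threshold are assumptions for illustration, not minikube's actual code or constant.

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the drift between the guest and host
// clocks is small enough to skip resynchronization.
func withinClockTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(74 * time.Millisecond) // a delta similar to the one logged above
	fmt.Println(withinClockTolerance(guest, host, 2*time.Second)) // true
}
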
	I0805 13:19:50.012907  458687 start.go:83] releasing machines lock for "newest-cni-202226", held for 21.492114342s
	I0805 13:19:50.012931  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:50.013228  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:19:50.015718  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:50.016085  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:50.016118  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:50.016247  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:50.016754  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:50.016967  458687 main.go:141] libmachine: (newest-cni-202226) Calling .DriverName
	I0805 13:19:50.017032  458687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 13:19:50.017076  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:50.017194  458687 ssh_runner.go:195] Run: cat /version.json
	I0805 13:19:50.017215  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHHostname
	I0805 13:19:50.019594  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:50.019821  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:50.019978  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:50.020005  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:50.020213  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:50.020235  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:50.020248  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:50.020459  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHPort
	I0805 13:19:50.020500  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:50.020636  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHKeyPath
	I0805 13:19:50.020781  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:50.020781  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetSSHUsername
	I0805 13:19:50.020916  458687 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:19:50.021060  458687 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/newest-cni-202226/id_rsa Username:docker}
	I0805 13:19:50.100760  458687 ssh_runner.go:195] Run: systemctl --version
	I0805 13:19:50.125179  458687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 13:19:50.270723  458687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 13:19:50.277031  458687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 13:19:50.277109  458687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 13:19:50.294947  458687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 13:19:50.294974  458687 start.go:495] detecting cgroup driver to use...
	I0805 13:19:50.295068  458687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 13:19:50.311998  458687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 13:19:50.326020  458687 docker.go:217] disabling cri-docker service (if available) ...
	I0805 13:19:50.326079  458687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 13:19:50.340451  458687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 13:19:50.354530  458687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 13:19:50.472780  458687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 13:19:50.620214  458687 docker.go:233] disabling docker service ...
	I0805 13:19:50.620278  458687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 13:19:50.635028  458687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 13:19:50.647905  458687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 13:19:50.782991  458687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 13:19:50.893353  458687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 13:19:50.908795  458687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 13:19:50.927000  458687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:19:51.217578  458687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 13:19:51.217674  458687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:19:51.228801  458687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 13:19:51.228872  458687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:19:51.239196  458687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:19:51.249493  458687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:19:51.259455  458687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 13:19:51.269801  458687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:19:51.280799  458687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 13:19:51.299582  458687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
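
[Editor's note] The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup_manager, conmon_cgroup, and the default_sysctls block. A minimal Go sketch of the same line-level substitution for the cgroup_manager key is shown below; the path and permissions mirror the log but the function itself is illustrative, not minikube's implementation.

package main

import (
	"log"
	"os"
	"regexp"
)

// setCgroupManager rewrites any existing cgroup_manager line in a crio.conf.d
// drop-in to use cgroupfs, the same edit the sed call in the log performs.
func setCgroupManager(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		log.Fatal(err)
	}
}
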
	I0805 13:19:51.309846  458687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 13:19:51.318808  458687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 13:19:51.318864  458687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 13:19:51.331962  458687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 13:19:51.341127  458687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:19:51.462366  458687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 13:19:51.612225  458687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 13:19:51.612298  458687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 13:19:51.616962  458687 start.go:563] Will wait 60s for crictl version
	I0805 13:19:51.617014  458687 ssh_runner.go:195] Run: which crictl
	I0805 13:19:51.620596  458687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 13:19:51.662990  458687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 13:19:51.663108  458687 ssh_runner.go:195] Run: crio --version
	I0805 13:19:51.691464  458687 ssh_runner.go:195] Run: crio --version
	I0805 13:19:51.722661  458687 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 13:19:51.723929  458687 main.go:141] libmachine: (newest-cni-202226) Calling .GetIP
	I0805 13:19:51.726597  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:51.726991  458687 main.go:141] libmachine: (newest-cni-202226) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:72:ff", ip: ""} in network mk-newest-cni-202226: {Iface:virbr3 ExpiryTime:2024-08-05 14:19:39 +0000 UTC Type:0 Mac:52:54:00:13:72:ff Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-202226 Clientid:01:52:54:00:13:72:ff}
	I0805 13:19:51.727016  458687 main.go:141] libmachine: (newest-cni-202226) DBG | domain newest-cni-202226 has defined IP address 192.168.61.136 and MAC address 52:54:00:13:72:ff in network mk-newest-cni-202226
	I0805 13:19:51.727198  458687 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 13:19:51.731385  458687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 13:19:51.745138  458687 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0805 13:19:51.746422  458687 kubeadm.go:883] updating cluster {Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHo
stTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 13:19:51.746665  458687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:19:52.047286  458687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:19:52.344928  458687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:19:52.629553  458687 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 13:19:52.629753  458687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:19:52.919060  458687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:19:53.198871  458687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 13:19:53.488360  458687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 13:19:53.527787  458687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 13:19:53.527853  458687 ssh_runner.go:195] Run: which lz4
	I0805 13:19:53.532257  458687 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 13:19:53.536473  458687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 13:19:53.536511  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389126804 bytes)
	I0805 13:19:54.893008  458687 crio.go:462] duration metric: took 1.360784473s to copy over tarball
	I0805 13:19:54.893093  458687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 13:19:57.013013  458687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.119889041s)
	I0805 13:19:57.013042  458687 crio.go:469] duration metric: took 2.11999985s to extract the tarball
	I0805 13:19:57.013053  458687 ssh_runner.go:146] rm: /preloaded.tar.lz4
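
[Editor's note] The preload tarball is copied to /preloaded.tar.lz4 and unpacked with tar's lz4 filter, and the log records a duration metric for the extraction. A minimal Go sketch that shells out to the same tar invocation and times it is shown below; the paths are taken from the log, everything else is an illustrative assumption.

package main

import (
	"log"
	"os/exec"
	"time"
)

// extractPreload unpacks a .tar.lz4 preload into destDir by shelling out to
// tar with the lz4 filter, mirroring the command recorded in the log above.
func extractPreload(tarball, destDir string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Printf("tar output: %s", out)
		return 0, err
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4", "/var")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("took %s to extract the tarball", d)
}
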
	I0805 13:19:57.049430  458687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 13:19:57.202728  458687 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 13:19:57.202753  458687 cache_images.go:84] Images are preloaded, skipping loading
	I0805 13:19:57.202763  458687 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.0-rc.0 crio true true} ...
	I0805 13:19:57.202884  458687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-202226 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 13:19:57.202987  458687 ssh_runner.go:195] Run: crio config
	I0805 13:19:57.258139  458687 cni.go:84] Creating CNI manager for ""
	I0805 13:19:57.258165  458687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:19:57.258175  458687 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0805 13:19:57.258205  458687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-202226 NodeName:newest-cni-202226 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 13:19:57.258385  458687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-202226"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 13:19:57.258461  458687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 13:19:57.269722  458687 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 13:19:57.269810  458687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 13:19:57.279576  458687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0805 13:19:57.296633  458687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 13:19:57.312950  458687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
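
[Editor's note] The kubeadm config printed earlier is a multi-document YAML (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is written out here as kubeadm.yaml.new. A minimal Go sketch that decodes just the KubeletConfiguration document and reads back its cgroupDriver (which should match the cgroup_manager set for CRI-O above) follows; it assumes gopkg.in/yaml.v3 and declares only the fields needed for the check.

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletDoc captures just the fields of a KubeletConfiguration needed here.
type kubeletDoc struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

// kubeletCgroupDriver scans a multi-document kubeadm YAML and returns the
// cgroupDriver from its KubeletConfiguration document.
func kubeletCgroupDriver(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc kubeletDoc
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			return "", err
		}
		if doc.Kind == "KubeletConfiguration" {
			return doc.CgroupDriver, nil
		}
	}
	return "", fmt.Errorf("no KubeletConfiguration document in %s", path)
}

func main() {
	driver, err := kubeletCgroupDriver("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet cgroupDriver:", driver) // expected: cgroupfs
}
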
	I0805 13:19:57.330447  458687 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0805 13:19:57.334344  458687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 13:19:57.346981  458687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:19:57.478065  458687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:19:57.497187  458687 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226 for IP: 192.168.61.136
	I0805 13:19:57.497214  458687 certs.go:194] generating shared ca certs ...
	I0805 13:19:57.497233  458687 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:19:57.497430  458687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 13:19:57.497542  458687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 13:19:57.497567  458687 certs.go:256] generating profile certs ...
	I0805 13:19:57.497692  458687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/client.key
	I0805 13:19:57.497777  458687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key.64b1698f
	I0805 13:19:57.497831  458687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.key
	I0805 13:19:57.497974  458687 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 13:19:57.498014  458687 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 13:19:57.498026  458687 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 13:19:57.498061  458687 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 13:19:57.498093  458687 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 13:19:57.498123  458687 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 13:19:57.498178  458687 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 13:19:57.499028  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 13:19:57.543833  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 13:19:57.581999  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 13:19:57.617993  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 13:19:57.660615  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 13:19:57.686392  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 13:19:57.710518  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 13:19:57.733744  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/newest-cni-202226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 13:19:57.757219  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 13:19:57.783635  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 13:19:57.810024  458687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 13:19:57.834617  458687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 13:19:57.852014  458687 ssh_runner.go:195] Run: openssl version
	I0805 13:19:57.858087  458687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 13:19:57.870103  458687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 13:19:57.874927  458687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 13:19:57.875004  458687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 13:19:57.881061  458687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 13:19:57.893730  458687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 13:19:57.905507  458687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 13:19:57.911542  458687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 13:19:57.911604  458687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 13:19:57.918304  458687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 13:19:57.930110  458687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 13:19:57.941549  458687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 13:19:57.946371  458687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 13:19:57.946428  458687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 13:19:57.952618  458687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 13:19:57.964310  458687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 13:19:57.969098  458687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 13:19:57.975410  458687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 13:19:57.981502  458687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 13:19:57.987735  458687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 13:19:57.993634  458687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 13:19:57.999364  458687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
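
[Editor's note] Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. The same check expressed with Go's standard crypto/x509 package is sketched below; the certificate path is one of those in the log, and the function is an illustrative equivalent rather than minikube's own code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// validFor reports whether the PEM-encoded certificate at path is still valid
// after the given duration, the question `openssl x509 -checkend` answers.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for the next 24h:", ok)
}
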
	I0805 13:19:58.005457  458687 kubeadm.go:392] StartCluster: {Name:newest-cni-202226 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:newest-cni-202226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostT
imeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 13:19:58.005552  458687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 13:19:58.005606  458687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 13:19:58.051258  458687 cri.go:89] found id: ""
	I0805 13:19:58.051332  458687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 13:19:58.063683  458687 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 13:19:58.063712  458687 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 13:19:58.063771  458687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 13:19:58.075546  458687 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 13:19:58.076404  458687 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-202226" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:19:58.076751  458687 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-202226" cluster setting kubeconfig missing "newest-cni-202226" context setting]
	I0805 13:19:58.077410  458687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:19:58.144871  458687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 13:19:58.157558  458687 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0805 13:19:58.157595  458687 kubeadm.go:1160] stopping kube-system containers ...
	I0805 13:19:58.157610  458687 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 13:19:58.157679  458687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 13:19:58.206155  458687 cri.go:89] found id: ""
	I0805 13:19:58.206222  458687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 13:19:58.227281  458687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:19:58.238858  458687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:19:58.238877  458687 kubeadm.go:157] found existing configuration files:
	
	I0805 13:19:58.238931  458687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:19:58.249891  458687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:19:58.249956  458687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:19:58.262219  458687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:19:58.273305  458687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:19:58.273363  458687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:19:58.285168  458687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:19:58.296463  458687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:19:58.296515  458687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:19:58.307870  458687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:19:58.318429  458687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:19:58.318474  458687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
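
[Editor's note] The grep/rm pairs above delete every /etc/kubernetes/*.conf that does not already reference https://control-plane.minikube.internal:8443 before kubeadm regenerates them. A minimal Go sketch of that cleanup loop follows; the endpoint and file list come from the log, while the function itself is only an illustration of the behavior.

package main

import (
	"bytes"
	"log"
	"os"
)

// removeStaleConfigs deletes each kubeconfig that does not mention the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func removeStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("remove %s: %v", p, rmErr)
			}
		}
	}
}

func main() {
	removeStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
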
	I0805 13:19:58.329409  458687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:19:58.340538  458687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 13:19:58.466825  458687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 13:19:59.272899  458687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 13:19:59.480498  458687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 13:19:59.541661  458687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 13:19:59.623804  458687 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:19:59.623899  458687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:20:00.124539  458687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:20:00.624509  458687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:20:00.644067  458687 api_server.go:72] duration metric: took 1.020263785s to wait for apiserver process to appear ...
	I0805 13:20:00.644105  458687 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:20:00.644136  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:05.644523  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 13:20:05.644585  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:10.645032  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 13:20:10.645088  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:15.645312  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 13:20:15.645360  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:20.645813  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 13:20:20.645858  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:21.230168  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": read tcp 192.168.61.1:57498->192.168.61.136:8443: read: connection reset by peer
	I0805 13:20:21.230219  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:21.230855  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0805 13:20:21.644251  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:21.644877  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0805 13:20:22.144466  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:27.145795  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 13:20:27.145865  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0805 13:20:32.146773  458687 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 13:20:32.146821  458687 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
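
[Editor's note] The loop above keeps probing https://192.168.61.136:8443/healthz, backing off and retrying on connection-refused and timeout errors, until the apiserver answers or the overall wait expires. A minimal Go sketch of such a poll is shown below; the ~500ms retry cadence matches the timestamps in the log, while skipping TLS verification is purely an illustrative simplification (the real check authenticates against the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry cadence similar to the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.136:8443/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("apiserver healthz OK")
}
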
	
	
	==> CRI-O <==
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.145937840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722864035145881466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03dbdcb7-bfb6-4528-a943-cd68e46f8d1c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.146775085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ab5f329-0c02-493c-872b-311d93c37f97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.146832607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ab5f329-0c02-493c-872b-311d93c37f97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.147005652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ab5f329-0c02-493c-872b-311d93c37f97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.182740629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dccfe295-6e35-4988-b21f-5c6b6573bb2b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.182822369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dccfe295-6e35-4988-b21f-5c6b6573bb2b name=/runtime.v1.RuntimeService/Version
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.184420941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b78536fd-bd1f-43e5-9cab-c9eca4916e3e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.184893125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722864035184870341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b78536fd-bd1f-43e5-9cab-c9eca4916e3e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.185389815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e82b3f7-99f8-45f4-83c5-9a166b18df74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.185493428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e82b3f7-99f8-45f4-83c5-9a166b18df74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.185682552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e82b3f7-99f8-45f4-83c5-9a166b18df74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.223054095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e6664ae-c557-4473-a041-43c1deaec1da name=/runtime.v1.RuntimeService/Version
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.223131171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e6664ae-c557-4473-a041-43c1deaec1da name=/runtime.v1.RuntimeService/Version
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.224297374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db6ce304-31a9-4369-85f8-e4d6dcbd2b8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.224757069Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722864035224731085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db6ce304-31a9-4369-85f8-e4d6dcbd2b8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.225288039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f7756f4-e63f-41ad-84f5-3b878f4282f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.225344179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f7756f4-e63f-41ad-84f5-3b878f4282f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.225592878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f7756f4-e63f-41ad-84f5-3b878f4282f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.260042091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=097464aa-ade1-467f-bff2-edbcb3b333e9 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.260113879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=097464aa-ade1-467f-bff2-edbcb3b333e9 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.261579770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae977edf-2a5b-452b-843b-d9a25cbb3047 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.261967254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722864035261945293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae977edf-2a5b-452b-843b-d9a25cbb3047 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.262733721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b155a87a-fab8-4e6c-b683-95080926be23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.262818605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b155a87a-fab8-4e6c-b683-95080926be23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:20:35 default-k8s-diff-port-371585 crio[728]: time="2024-08-05 13:20:35.263013796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592,PodSandboxId:02d39de3ad0a62de0832c560a36b7c1b7b6a163fe6477ab3ce7a1f406e5cc732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722863043852812507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3de3fc-9b34-4a46-a7cf-5487647b06ca,},Annotations:map[string]string{io.kubernetes.container.hash: effae2af,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1,PodSandboxId:5ca89a57de01359ec982f461d451756cf2846c5af49d1759e8001d37ab291401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863043160628624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5vxpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f6aa906-d76f-4f92-8de4-4d3a4a1ee733,},Annotations:map[string]string{io.kubernetes.container.hash: 6596e46f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593,PodSandboxId:6e0294ae4fb3e5c08b9f5e297746ddaf6211ea5555fc85f3bf4945493c9a697e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722863042916935208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtt9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8,},Annotations:map[string]string{io.kubernetes.container.hash: 8a966db1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1,PodSandboxId:f766239566395fb73fdd0176cc0814edf40deb921cea7b36a6753630fcdfd73c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722863041804354684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4v6sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497a1512-cdee-49ff-92ea-ea523d3de2a4,},Annotations:map[string]string{io.kubernetes.container.hash: d9fcce48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca,PodSandboxId:437c0d82e3552aa7c0a8934650f942eb96476ff56d5df6facf17a8dd09036aa4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722863022402012575,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73de6958734815f839b54482962df70,},Annotations:map[string]string{io.kubernetes.container.hash: ecc30e00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5,PodSandboxId:959ebdea65b37ea7c851dc05e5cfb1d0676184d98af9be3ae672f253784a8dac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722863022360230567,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7dae362ca8e66156643a6c11b9c286,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9,PodSandboxId:31b7ccd50e4d1fa94f51572ae633cb1afcc7006c199f5ea2ee5c18801369c095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722863022367210315,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9260e1be1654581fec665fd54ad4bcb,},Annotations:map[string]string{io.kubernetes.container.hash: fa1be0c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32,PodSandboxId:d5f7a49f52f8e8e96ab379fed95e85d019178bd5214e580b31cd3e6a8498e1fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722863022271973773,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-371585,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cc68ee0da609e8d11e788f77345eaf,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b155a87a-fab8-4e6c-b683-95080926be23 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1116fb42f7d41       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   02d39de3ad0a6       storage-provisioner
	897074922bcfd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   5ca89a57de013       coredns-7db6d8ff4d-5vxpl
	ae5a9ec4aaae6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   6e0294ae4fb3e       coredns-7db6d8ff4d-qtt9j
	d6a75d2e01ad7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   f766239566395       kube-proxy-4v6sn
	3cbbbd91c5830       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   437c0d82e3552       etcd-default-k8s-diff-port-371585
	dc1d9ae10f71a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   31b7ccd50e4d1       kube-apiserver-default-k8s-diff-port-371585
	82e042c530805       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   959ebdea65b37       kube-scheduler-default-k8s-diff-port-371585
	aab94bc76b4e6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   d5f7a49f52f8e       kube-controller-manager-default-k8s-diff-port-371585
	
	
	==> coredns [897074922bcfd326f191a84410e8303aed84e33b3973c6e4d825139733379ae1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ae5a9ec4aaae6f94d84456461e05b6809eba096807bee66b7e93cc7be633d593] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-371585
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-371585
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=default-k8s-diff-port-371585
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 13:03:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-371585
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 13:20:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 13:19:27 +0000   Mon, 05 Aug 2024 13:03:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 13:19:27 +0000   Mon, 05 Aug 2024 13:03:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 13:19:27 +0000   Mon, 05 Aug 2024 13:03:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 13:19:27 +0000   Mon, 05 Aug 2024 13:03:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.228
	  Hostname:    default-k8s-diff-port-371585
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d91729f35f4a63ad357597796476dc
	  System UUID:                74d91729-f35f-4a63-ad35-7597796476dc
	  Boot ID:                    dfc844bf-7a50-44db-8c15-aa02cd2e61bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5vxpl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-qtt9j                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-371585                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-371585             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-371585    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4v6sn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-371585             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-xf92r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-371585 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-371585 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-371585 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-371585 event: Registered Node default-k8s-diff-port-371585 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050767] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.813351] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.608000] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.397969] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.202541] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.123638] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.228260] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.149949] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.382615] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.864588] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.057541] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.046870] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.622921] kauditd_printk_skb: 97 callbacks suppressed
	[Aug 5 12:59] kauditd_printk_skb: 79 callbacks suppressed
	[Aug 5 13:03] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.474131] systemd-fstab-generator[3564]: Ignoring "noauto" option for root device
	[  +4.536707] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.012069] systemd-fstab-generator[3887]: Ignoring "noauto" option for root device
	[Aug 5 13:04] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.431057] systemd-fstab-generator[4201]: Ignoring "noauto" option for root device
	[Aug 5 13:05] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [3cbbbd91c583002208ae7fcce84f734e068b113b0be7adf24c99938212088dca] <==
	{"level":"info","ts":"2024-08-05T13:03:43.713643Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T13:03:43.719541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:43.719603Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T13:03:43.75675Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.228:2379"}
	{"level":"info","ts":"2024-08-05T13:13:43.782287Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-08-05T13:13:43.797397Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":682,"took":"14.738886ms","hash":1093932133,"current-db-size-bytes":2195456,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2195456,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-05T13:13:43.797642Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1093932133,"revision":682,"compact-revision":-1}
	{"level":"info","ts":"2024-08-05T13:18:43.79283Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-08-05T13:18:43.797076Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":925,"took":"3.734171ms","hash":2439778799,"current-db-size-bytes":2195456,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-05T13:18:43.797149Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2439778799,"revision":925,"compact-revision":682}
	{"level":"info","ts":"2024-08-05T13:18:58.804851Z","caller":"traceutil/trace.go:171","msg":"trace[1235055037] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"127.687099ms","start":"2024-08-05T13:18:58.677075Z","end":"2024-08-05T13:18:58.804762Z","steps":["trace[1235055037] 'process raft request'  (duration: 127.573336ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T13:19:57.164586Z","caller":"traceutil/trace.go:171","msg":"trace[562157438] transaction","detail":"{read_only:false; response_revision:1229; number_of_response:1; }","duration":"399.613731ms","start":"2024-08-05T13:19:56.764938Z","end":"2024-08-05T13:19:57.164552Z","steps":["trace[562157438] 'process raft request'  (duration: 399.308928ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T13:19:57.166158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T13:19:56.76492Z","time spent":"400.440915ms","remote":"127.0.0.1:43690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-371585\" mod_revision:1221 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-371585\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-371585\" > >"}
	{"level":"warn","ts":"2024-08-05T13:19:57.507876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.626033ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17903656946615117155 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1228 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-05T13:19:57.508238Z","caller":"traceutil/trace.go:171","msg":"trace[1793593960] linearizableReadLoop","detail":"{readStateIndex:1438; appliedIndex:1437; }","duration":"246.422164ms","start":"2024-08-05T13:19:57.261792Z","end":"2024-08-05T13:19:57.508214Z","steps":["trace[1793593960] 'read index received'  (duration: 34.22µs)","trace[1793593960] 'applied index is now lower than readState.Index'  (duration: 246.386204ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T13:19:57.508501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.629341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.228\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-08-05T13:19:57.508651Z","caller":"traceutil/trace.go:171","msg":"trace[245876405] range","detail":"{range_begin:/registry/masterleases/192.168.50.228; range_end:; response_count:1; response_revision:1230; }","duration":"246.880445ms","start":"2024-08-05T13:19:57.261758Z","end":"2024-08-05T13:19:57.508638Z","steps":["trace[245876405] 'agreement among raft nodes before linearized reading'  (duration: 246.56849ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T13:19:57.508921Z","caller":"traceutil/trace.go:171","msg":"trace[398627774] transaction","detail":"{read_only:false; response_revision:1230; number_of_response:1; }","duration":"339.720604ms","start":"2024-08-05T13:19:57.169188Z","end":"2024-08-05T13:19:57.508909Z","steps":["trace[398627774] 'process raft request'  (duration: 87.775831ms)","trace[398627774] 'compare'  (duration: 250.378951ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T13:19:57.509075Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T13:19:57.169173Z","time spent":"339.830088ms","remote":"127.0.0.1:43588","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1228 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-05T13:19:57.758049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.174713ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17903656946615117158 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:78769122a372e565>","response":"size:40"}
	{"level":"info","ts":"2024-08-05T13:19:57.75813Z","caller":"traceutil/trace.go:171","msg":"trace[311183213] linearizableReadLoop","detail":"{readStateIndex:1439; appliedIndex:1438; }","duration":"189.274384ms","start":"2024-08-05T13:19:57.568846Z","end":"2024-08-05T13:19:57.75812Z","steps":["trace[311183213] 'read index received'  (duration: 64.985074ms)","trace[311183213] 'applied index is now lower than readState.Index'  (duration: 124.288469ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T13:19:57.758352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.955891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-05T13:19:57.758414Z","caller":"traceutil/trace.go:171","msg":"trace[132747145] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:1230; }","duration":"157.055375ms","start":"2024-08-05T13:19:57.60135Z","end":"2024-08-05T13:19:57.758405Z","steps":["trace[132747145] 'agreement among raft nodes before linearized reading'  (duration: 156.95183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T13:19:57.758421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.571224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T13:19:57.758653Z","caller":"traceutil/trace.go:171","msg":"trace[1599478127] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1230; }","duration":"189.826757ms","start":"2024-08-05T13:19:57.568819Z","end":"2024-08-05T13:19:57.758645Z","steps":["trace[1599478127] 'agreement among raft nodes before linearized reading'  (duration: 189.575972ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:20:35 up 22 min,  0 users,  load average: 0.25, 0.17, 0.11
	Linux default-k8s-diff-port-371585 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dc1d9ae10f71a4ca57796b2ebb02b9ab1d598c3d4ce6aafd0b5e9d143ecbe2c9] <==
	W0805 13:16:46.505200       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:16:46.505280       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:16:46.505307       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:18:45.503941       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:18:45.504429       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0805 13:18:46.504973       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:18:46.505075       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:18:46.505104       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:18:46.505181       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:18:46.505245       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:18:46.506376       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:19:46.505937       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:19:46.506099       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0805 13:19:46.506116       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0805 13:19:46.507246       1 handler_proxy.go:93] no RequestInfo found in the context
	E0805 13:19:46.507551       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0805 13:19:46.507644       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0805 13:19:57.842313       1 trace.go:236] Trace[1761627549]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.228,type:*v1.Endpoints,resource:apiServerIPInfo (05-Aug-2024 13:19:57.261) (total time: 581ms):
	Trace[1761627549]: ---"initial value restored" 248ms (13:19:57.509)
	Trace[1761627549]: ---"Transaction prepared" 249ms (13:19:57.759)
	Trace[1761627549]: ---"Txn call completed" 82ms (13:19:57.842)
	Trace[1761627549]: [581.124771ms] [581.124771ms] END
	
	
	==> kube-controller-manager [aab94bc76b4e689f42e1dcfd7779a9cef2e2cd34d3e887e9847458c0fa130f32] <==
	I0805 13:15:08.030919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="72.908µs"
	E0805 13:15:31.123513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:15:31.595672       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:16:01.128912       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:16:01.604091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:16:31.133960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:16:31.613795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:17:01.140007       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:17:01.621072       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:17:31.147026       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:17:31.635227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:18:01.152667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:18:01.643287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:18:31.157587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:18:31.652077       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:19:01.162778       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:19:01.659627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:19:31.168144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:19:31.669603       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0805 13:20:01.174338       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:20:01.678172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0805 13:20:08.033853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="493.107µs"
	I0805 13:20:20.031670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="322.207µs"
	E0805 13:20:31.179006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0805 13:20:31.686037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d6a75d2e01ad7329f9fb3c03f149c3f4888aedb2d018815471e53b33eda0c5e1] <==
	I0805 13:04:02.016712       1 server_linux.go:69] "Using iptables proxy"
	I0805 13:04:02.026784       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.228"]
	I0805 13:04:02.177715       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 13:04:02.177761       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 13:04:02.177777       1 server_linux.go:165] "Using iptables Proxier"
	I0805 13:04:02.190427       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 13:04:02.190692       1 server.go:872] "Version info" version="v1.30.3"
	I0805 13:04:02.190704       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 13:04:02.193503       1 config.go:192] "Starting service config controller"
	I0805 13:04:02.193520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 13:04:02.193556       1 config.go:101] "Starting endpoint slice config controller"
	I0805 13:04:02.193560       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 13:04:02.193873       1 config.go:319] "Starting node config controller"
	I0805 13:04:02.193878       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 13:04:02.294627       1 shared_informer.go:320] Caches are synced for node config
	I0805 13:04:02.294654       1 shared_informer.go:320] Caches are synced for service config
	I0805 13:04:02.294681       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [82e042c530805a04720b16fd04e723152d67c28e78732ac823e9e29ccd368eb5] <==
	W0805 13:03:46.344338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 13:03:46.344525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 13:03:46.346886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 13:03:46.346975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 13:03:46.347160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 13:03:46.347213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 13:03:46.485968       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:46.486057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 13:03:46.500755       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:46.500957       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 13:03:46.528303       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 13:03:46.528354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 13:03:46.567148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 13:03:46.567692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 13:03:46.589652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 13:03:46.589706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 13:03:46.670338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 13:03:46.670509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 13:03:46.684013       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 13:03:46.684171       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 13:03:46.764336       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 13:03:46.764580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 13:03:46.816399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 13:03:46.816589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 13:03:48.893758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 13:18:11 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:18:11.014916    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:18:24 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:18:24.016717    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:18:37 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:18:37.014874    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:18:48 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:18:48.030743    3894 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:18:48 default-k8s-diff-port-371585 kubelet[3894]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:18:48 default-k8s-diff-port-371585 kubelet[3894]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:18:48 default-k8s-diff-port-371585 kubelet[3894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:18:48 default-k8s-diff-port-371585 kubelet[3894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:18:49 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:18:49.014160    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:19:02 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:02.015120    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:19:15 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:15.014499    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:19:30 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:30.016342    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:19:41 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:41.015690    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:19:48 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:48.030970    3894 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 13:19:48 default-k8s-diff-port-371585 kubelet[3894]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 13:19:48 default-k8s-diff-port-371585 kubelet[3894]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 13:19:48 default-k8s-diff-port-371585 kubelet[3894]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 13:19:48 default-k8s-diff-port-371585 kubelet[3894]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 13:19:56 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:56.040709    3894 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 05 13:19:56 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:56.041154    3894 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 05 13:19:56 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:56.041840    3894 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nl4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-xf92r_kube-system(edb560ac-ddb1-4afa-b3a3-aa054ea38162): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 05 13:19:56 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:19:56.042017    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:20:08 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:20:08.015045    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:20:20 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:20:20.015084    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	Aug 05 13:20:31 default-k8s-diff-port-371585 kubelet[3894]: E0805 13:20:31.014665    3894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xf92r" podUID="edb560ac-ddb1-4afa-b3a3-aa054ea38162"
	
	
	==> storage-provisioner [1116fb42f7d411e469d722adffd8ba7bf79322eabd75d66df0f7dc83f8811592] <==
	I0805 13:04:03.950730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 13:04:03.967146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 13:04:03.968129       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 13:04:03.985849       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 13:04:03.986122       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-371585_c048ac55-484c-4617-9c53-a4047f8fdf69!
	I0805 13:04:03.989891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6205fb02-bba8-4c3a-9d67-bf47b061a534", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-371585_c048ac55-484c-4617-9c53-a4047f8fdf69 became leader
	I0805 13:04:04.087213       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-371585_c048ac55-484c-4617-9c53-a4047f8fdf69!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xf92r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 describe pod metrics-server-569cc877fc-xf92r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-371585 describe pod metrics-server-569cc877fc-xf92r: exit status 1 (60.20796ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xf92r" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-371585 describe pod metrics-server-569cc877fc-xf92r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (445.71s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
E0805 13:17:09.900737  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.41:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.41:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: [the warning above repeated for the remainder of the wait; the apiserver at 192.168.61.41:8443 stayed unreachable throughout]
E0805 13:17:48.268006  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
E0805 13:17:52.926468  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
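For context on the warning above: helpers_test.go is listing pods in the kubernetes-dashboard namespace by the k8s-app=kubernetes-dashboard label selector, and every attempt fails because the profile's apiserver at 192.168.61.41:8443 refuses connections. A minimal client-go sketch of that kind of call, for anyone reproducing the check by hand; the kubeconfig path is the run's KUBECONFIG from the log below, and the rest is illustrative rather than minikube's actual helper code:

    // Illustrative sketch of a label-selector pod list; not minikube's helper.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from this run's environment; adjust for yours.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19377-383955/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
        if err != nil {
            // With the apiserver stopped this returns the same
            // "dial tcp 192.168.61.41:8443: connect: connection refused" seen above.
            fmt.Println("list failed:", err)
            return
        }
        fmt.Printf("found %d dashboard pods\n", len(pods.Items))
    }

Against a stopped apiserver the list never succeeds, which is why the poll above keeps emitting the same warning until its deadline expires.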
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (230.974221ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-635707" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
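The 9m0s figure is the overall wait budget, and the "context deadline exceeded" failure is what a deadline-scoped polling loop reports when that budget runs out. A standard-library sketch of the pattern, with a shortened timeout so it finishes quickly; the helper name, interval, and simulated error are illustrative, not the test's actual implementation:

    // Deadline-bounded poll sketch; mirrors the failure mode, not helpers_test.go.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitForReady calls check until it reports true or ctx's deadline expires.
    func waitForReady(ctx context.Context, interval time.Duration, check func(context.Context) (bool, error)) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            ok, err := check(ctx)
            if err != nil {
                fmt.Println("WARNING:", err) // analogous to the repeated warnings above
            } else if ok {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // "context deadline exceeded" once the budget is spent
            case <-ticker.C:
            }
        }
    }

    func main() {
        // The test budgets 9m0s; a short timeout here just demonstrates the outcome.
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        check := func(context.Context) (bool, error) {
            return false, errors.New("dial tcp 192.168.61.41:8443: connect: connection refused")
        }
        fmt.Println(waitForReady(ctx, time.Second, check)) // prints: context deadline exceeded
    }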
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-635707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-635707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.071µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-635707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
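The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to carry the overridden registry.k8s.io/echoserver:1.4 image (set via the --images/--registries flags visible in the Audit log below), but the describe above returned nothing because the apiserver is down. A hedged client-go sketch of that image check; the namespace, deployment name, expected image, and kubeconfig path come from this report, everything else is illustrative:

    // Illustrative check that the scraper deployment uses the overridden image;
    // mirrors the intent of start_stop_delete_test.go:297, not its code.
    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19377-383955/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deploy, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(
            context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
        if err != nil {
            // With the apiserver stopped this also fails with connection refused.
            fmt.Println("get deployment failed:", err)
            return
        }
        for _, c := range deploy.Spec.Template.Spec.Containers {
            if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
                fmt.Println("found expected image:", c.Image)
                return
            }
        }
        fmt.Println("expected image not found")
    }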
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (216.586781ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-635707 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-635707 logs -n 25: (1.654488894s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-119870 sudo cat                              | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo                                  | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo find                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-119870 sudo crio                             | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-119870                                       | bridge-119870                | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-130994 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | disable-driver-mounts-130994                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:51 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-321139            | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC | 05 Aug 24 12:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-669469             | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-371585  | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC | 05 Aug 24 12:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-321139                 | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-635707        | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-321139                                  | embed-certs-321139           | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-669469                  | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-669469                                   | no-preload-669469            | jenkins | v1.33.1 | 05 Aug 24 12:53 UTC | 05 Aug 24 13:03 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-371585       | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-371585 | jenkins | v1.33.1 | 05 Aug 24 12:54 UTC | 05 Aug 24 13:04 UTC |
	|         | default-k8s-diff-port-371585                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-635707             | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC | 05 Aug 24 12:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-635707                              | old-k8s-version-635707       | jenkins | v1.33.1 | 05 Aug 24 12:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 12:55:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 12:55:11.960192  451238 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:55:11.960471  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960479  451238 out.go:304] Setting ErrFile to fd 2...
	I0805 12:55:11.960484  451238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:55:11.960646  451238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:55:11.961145  451238 out.go:298] Setting JSON to false
	I0805 12:55:11.962063  451238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9459,"bootTime":1722853053,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:55:11.962121  451238 start.go:139] virtualization: kvm guest
	I0805 12:55:11.964372  451238 out.go:177] * [old-k8s-version-635707] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:55:11.965770  451238 notify.go:220] Checking for updates...
	I0805 12:55:11.965787  451238 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:55:11.967106  451238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:55:11.968790  451238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:55:11.970181  451238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:55:11.971500  451238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:55:11.973243  451238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:55:11.974825  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:55:11.975239  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.975319  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:11.990296  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0805 12:55:11.990704  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:11.991235  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:11.991259  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:11.991575  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:11.991765  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:11.993484  451238 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 12:55:11.994687  451238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:55:11.994952  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:55:11.994984  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:55:12.009528  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0805 12:55:12.009879  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:55:12.010353  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:55:12.010375  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:55:12.010670  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:55:12.010857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:55:12.044634  451238 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 12:55:12.045859  451238 start.go:297] selected driver: kvm2
	I0805 12:55:12.045876  451238 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.045987  451238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:55:12.046662  451238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.046731  451238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 12:55:12.061918  451238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 12:55:12.062400  451238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 12:55:12.062484  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:55:12.062502  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:55:12.062572  451238 start.go:340] cluster config:
	{Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:55:12.062722  451238 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 12:55:12.064478  451238 out.go:177] * Starting "old-k8s-version-635707" primary control-plane node in "old-k8s-version-635707" cluster
	I0805 12:55:10.820047  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:13.892041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:12.065640  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:55:12.065680  451238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 12:55:12.065701  451238 cache.go:56] Caching tarball of preloaded images
	I0805 12:55:12.065786  451238 preload.go:172] Found /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 12:55:12.065797  451238 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 12:55:12.065897  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:55:12.066073  451238 start.go:360] acquireMachinesLock for old-k8s-version-635707: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:55:19.971977  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:23.044092  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:29.124041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:32.196124  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:38.276045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:41.348117  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:47.428042  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:50.500022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:56.580074  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:55:59.652091  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:05.732072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:08.804128  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:14.884085  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:17.956073  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:24.036067  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:27.108059  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:33.188012  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:36.260134  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:42.340036  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:45.412038  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:51.492022  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:56:54.564068  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:00.644018  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:03.716112  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:09.796041  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:12.868080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:18.948054  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:22.020023  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:28.100099  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:31.172076  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:37.251997  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:40.324080  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:46.404055  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:49.476072  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:55.556045  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:57:58.627984  450393 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0805 12:58:01.632326  450576 start.go:364] duration metric: took 4m17.994768704s to acquireMachinesLock for "no-preload-669469"
	I0805 12:58:01.632391  450576 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:01.632403  450576 fix.go:54] fixHost starting: 
	I0805 12:58:01.632845  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:01.632880  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:01.648358  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0805 12:58:01.648860  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:01.649387  450576 main.go:141] libmachine: Using API Version  1
	I0805 12:58:01.649410  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:01.649779  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:01.649963  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:01.650176  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 12:58:01.651681  450576 fix.go:112] recreateIfNeeded on no-preload-669469: state=Stopped err=<nil>
	I0805 12:58:01.651715  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	W0805 12:58:01.651903  450576 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:01.653860  450576 out.go:177] * Restarting existing kvm2 VM for "no-preload-669469" ...
	I0805 12:58:01.655338  450576 main.go:141] libmachine: (no-preload-669469) Calling .Start
	I0805 12:58:01.655475  450576 main.go:141] libmachine: (no-preload-669469) Ensuring networks are active...
	I0805 12:58:01.656224  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network default is active
	I0805 12:58:01.656565  450576 main.go:141] libmachine: (no-preload-669469) Ensuring network mk-no-preload-669469 is active
	I0805 12:58:01.656898  450576 main.go:141] libmachine: (no-preload-669469) Getting domain xml...
	I0805 12:58:01.657537  450576 main.go:141] libmachine: (no-preload-669469) Creating domain...
	I0805 12:58:02.879809  450576 main.go:141] libmachine: (no-preload-669469) Waiting to get IP...
	I0805 12:58:02.880800  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:02.881194  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:02.881270  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:02.881175  451829 retry.go:31] will retry after 303.380177ms: waiting for machine to come up
	I0805 12:58:03.185834  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.186259  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.186288  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.186214  451829 retry.go:31] will retry after 263.494141ms: waiting for machine to come up
	I0805 12:58:03.451923  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.452263  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.452340  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.452217  451829 retry.go:31] will retry after 310.615163ms: waiting for machine to come up
	I0805 12:58:01.629832  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:01.629873  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630250  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:58:01.630295  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:58:01.630511  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:58:01.632158  450393 machine.go:97] duration metric: took 4m37.422562602s to provisionDockerMachine
	I0805 12:58:01.632208  450393 fix.go:56] duration metric: took 4m37.444588707s for fixHost
	I0805 12:58:01.632226  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 4m37.44461751s
	W0805 12:58:01.632250  450393 start.go:714] error starting host: provision: host is not running
	W0805 12:58:01.632431  450393 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0805 12:58:01.632445  450393 start.go:729] Will try again in 5 seconds ...
	I0805 12:58:03.764803  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:03.765280  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:03.765305  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:03.765243  451829 retry.go:31] will retry after 570.955722ms: waiting for machine to come up
	I0805 12:58:04.338423  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.338863  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.338893  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.338811  451829 retry.go:31] will retry after 485.490715ms: waiting for machine to come up
	I0805 12:58:04.825511  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:04.825882  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:04.825911  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:04.825823  451829 retry.go:31] will retry after 671.109731ms: waiting for machine to come up
	I0805 12:58:05.498113  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:05.498529  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:05.498557  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:05.498467  451829 retry.go:31] will retry after 997.668856ms: waiting for machine to come up
	I0805 12:58:06.497843  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:06.498144  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:06.498161  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:06.498120  451829 retry.go:31] will retry after 996.614411ms: waiting for machine to come up
	I0805 12:58:07.496801  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:07.497298  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:07.497334  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:07.497249  451829 retry.go:31] will retry after 1.384682595s: waiting for machine to come up
	I0805 12:58:06.634410  450393 start.go:360] acquireMachinesLock for embed-certs-321139: {Name:mk3babe91d55c30c0b650587cdec6489eb3a7ed6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 12:58:08.883309  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:08.883701  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:08.883732  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:08.883642  451829 retry.go:31] will retry after 2.017073843s: waiting for machine to come up
	I0805 12:58:10.903852  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:10.904279  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:10.904310  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:10.904233  451829 retry.go:31] will retry after 2.485880433s: waiting for machine to come up
	I0805 12:58:13.392693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:13.393169  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:13.393199  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:13.393116  451829 retry.go:31] will retry after 2.986076236s: waiting for machine to come up
	I0805 12:58:16.380921  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:16.381475  450576 main.go:141] libmachine: (no-preload-669469) DBG | unable to find current IP address of domain no-preload-669469 in network mk-no-preload-669469
	I0805 12:58:16.381508  450576 main.go:141] libmachine: (no-preload-669469) DBG | I0805 12:58:16.381432  451829 retry.go:31] will retry after 4.291617536s: waiting for machine to come up
	I0805 12:58:21.948770  450884 start.go:364] duration metric: took 4m4.773878111s to acquireMachinesLock for "default-k8s-diff-port-371585"
	I0805 12:58:21.948843  450884 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:21.948851  450884 fix.go:54] fixHost starting: 
	I0805 12:58:21.949291  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:21.949337  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:21.966933  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34223
	I0805 12:58:21.967356  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:21.967874  450884 main.go:141] libmachine: Using API Version  1
	I0805 12:58:21.967899  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:21.968326  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:21.968638  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:21.968874  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 12:58:21.970608  450884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-371585: state=Stopped err=<nil>
	I0805 12:58:21.970631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	W0805 12:58:21.970789  450884 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:21.973235  450884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-371585" ...
	I0805 12:58:21.974564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Start
	I0805 12:58:21.974751  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring networks are active...
	I0805 12:58:21.975581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network default is active
	I0805 12:58:21.976001  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Ensuring network mk-default-k8s-diff-port-371585 is active
	I0805 12:58:21.976376  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Getting domain xml...
	I0805 12:58:21.977078  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Creating domain...
	I0805 12:58:20.678231  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678743  450576 main.go:141] libmachine: (no-preload-669469) Found IP for machine: 192.168.72.223
	I0805 12:58:20.678771  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has current primary IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.678786  450576 main.go:141] libmachine: (no-preload-669469) Reserving static IP address...
	I0805 12:58:20.679230  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.679266  450576 main.go:141] libmachine: (no-preload-669469) Reserved static IP address: 192.168.72.223
	I0805 12:58:20.679288  450576 main.go:141] libmachine: (no-preload-669469) DBG | skip adding static IP to network mk-no-preload-669469 - found existing host DHCP lease matching {name: "no-preload-669469", mac: "52:54:00:55:38:0a", ip: "192.168.72.223"}
	I0805 12:58:20.679302  450576 main.go:141] libmachine: (no-preload-669469) DBG | Getting to WaitForSSH function...
	I0805 12:58:20.679317  450576 main.go:141] libmachine: (no-preload-669469) Waiting for SSH to be available...
	I0805 12:58:20.681864  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682263  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.682297  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.682447  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH client type: external
	I0805 12:58:20.682484  450576 main.go:141] libmachine: (no-preload-669469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa (-rw-------)
	I0805 12:58:20.682539  450576 main.go:141] libmachine: (no-preload-669469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:20.682557  450576 main.go:141] libmachine: (no-preload-669469) DBG | About to run SSH command:
	I0805 12:58:20.682568  450576 main.go:141] libmachine: (no-preload-669469) DBG | exit 0
	I0805 12:58:20.807791  450576 main.go:141] libmachine: (no-preload-669469) DBG | SSH cmd err, output: <nil>: 
	I0805 12:58:20.808168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetConfigRaw
	I0805 12:58:20.808767  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:20.811170  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811486  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.811517  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.811738  450576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/config.json ...
	I0805 12:58:20.811957  450576 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:20.811976  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:20.812203  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.814305  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814656  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.814693  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.814823  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.814996  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815156  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.815329  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.815503  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.815871  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.815887  450576 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:20.920311  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:20.920344  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920642  450576 buildroot.go:166] provisioning hostname "no-preload-669469"
	I0805 12:58:20.920695  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:20.920951  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:20.924029  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924583  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:20.924611  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:20.924770  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:20.925001  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925190  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:20.925334  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:20.925514  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:20.925755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:20.925774  450576 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-669469 && echo "no-preload-669469" | sudo tee /etc/hostname
	I0805 12:58:21.046579  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-669469
	
	I0805 12:58:21.046614  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.049322  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049657  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.049687  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.049851  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.050049  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050239  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.050412  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.050588  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.050755  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.050771  450576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-669469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-669469/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-669469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:21.165100  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:21.165134  450576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:21.165170  450576 buildroot.go:174] setting up certificates
	I0805 12:58:21.165180  450576 provision.go:84] configureAuth start
	I0805 12:58:21.165191  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetMachineName
	I0805 12:58:21.165477  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.168018  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168399  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.168443  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.168703  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.171168  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171536  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.171565  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.171638  450576 provision.go:143] copyHostCerts
	I0805 12:58:21.171713  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:21.171724  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:21.171807  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:21.171920  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:21.171930  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:21.171955  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:21.172010  450576 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:21.172016  450576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:21.172037  450576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:21.172095  450576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.no-preload-669469 san=[127.0.0.1 192.168.72.223 localhost minikube no-preload-669469]
	I0805 12:58:21.287395  450576 provision.go:177] copyRemoteCerts
	I0805 12:58:21.287463  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:21.287505  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.290416  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290765  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.290796  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.290962  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.291169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.291323  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.291460  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.373992  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:58:21.398249  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:21.422950  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:58:21.446469  450576 provision.go:87] duration metric: took 281.275299ms to configureAuth
	I0805 12:58:21.446500  450576 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:21.446688  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 12:58:21.446813  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.449833  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450219  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.450235  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.450526  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.450814  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.450993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.451168  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.451342  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.451515  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.451532  450576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:21.714813  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:21.714842  450576 machine.go:97] duration metric: took 902.872257ms to provisionDockerMachine
	I0805 12:58:21.714858  450576 start.go:293] postStartSetup for "no-preload-669469" (driver="kvm2")
	I0805 12:58:21.714889  450576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:21.714940  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.715304  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:21.715333  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.717989  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718405  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.718427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.718597  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.718832  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.718993  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.719152  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.802634  450576 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:21.806957  450576 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:21.806985  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:21.807079  450576 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:21.807186  450576 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:21.807293  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:21.816690  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:21.839848  450576 start.go:296] duration metric: took 124.973515ms for postStartSetup
	I0805 12:58:21.839903  450576 fix.go:56] duration metric: took 20.207499572s for fixHost
	I0805 12:58:21.839934  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.842548  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.842869  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.842893  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.843090  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.843310  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843502  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.843640  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.843815  450576 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:21.844015  450576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.223 22 <nil> <nil>}
	I0805 12:58:21.844029  450576 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:21.948584  450576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862701.921979093
	
	I0805 12:58:21.948613  450576 fix.go:216] guest clock: 1722862701.921979093
	I0805 12:58:21.948623  450576 fix.go:229] Guest: 2024-08-05 12:58:21.921979093 +0000 UTC Remote: 2024-08-05 12:58:21.83991063 +0000 UTC m=+278.340267839 (delta=82.068463ms)
	I0805 12:58:21.948671  450576 fix.go:200] guest clock delta is within tolerance: 82.068463ms
	I0805 12:58:21.948680  450576 start.go:83] releasing machines lock for "no-preload-669469", held for 20.316310092s
	I0805 12:58:21.948713  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.948990  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:21.951624  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952086  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.952136  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.952256  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952797  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.952984  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 12:58:21.953065  450576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:21.953113  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.953227  450576 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:21.953255  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 12:58:21.955837  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956081  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956200  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956227  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956370  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956504  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:21.956528  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.956568  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:21.956670  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.956760  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 12:58:21.956857  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:21.956906  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 12:58:21.957058  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 12:58:21.957205  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 12:58:22.058847  450576 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:22.065110  450576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:22.211415  450576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:22.219405  450576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:22.219492  450576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:22.240631  450576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:22.240659  450576 start.go:495] detecting cgroup driver to use...
	I0805 12:58:22.240764  450576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:22.258777  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:22.273312  450576 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:22.273400  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:22.288455  450576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:22.305028  450576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:22.428098  450576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:22.586232  450576 docker.go:233] disabling docker service ...
	I0805 12:58:22.586318  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:22.611888  450576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:22.627393  450576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:22.757335  450576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:22.878168  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:22.896174  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:22.914395  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:23.229202  450576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0805 12:58:23.229300  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.242180  450576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:23.242262  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.254577  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.265805  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.276522  450576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:23.287288  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.297863  450576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.314322  450576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:23.324662  450576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:23.334125  450576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:23.334192  450576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:23.346701  450576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:23.356256  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:23.474046  450576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:23.617276  450576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:23.617363  450576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:23.622001  450576 start.go:563] Will wait 60s for crictl version
	I0805 12:58:23.622047  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:23.626041  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:23.670186  450576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:23.670267  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.700616  450576 ssh_runner.go:195] Run: crio --version
	I0805 12:58:23.733411  450576 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0805 12:58:23.254293  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting to get IP...
	I0805 12:58:23.255331  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255802  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.255880  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.255773  451963 retry.go:31] will retry after 245.269435ms: waiting for machine to come up
	I0805 12:58:23.502617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.503130  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.503068  451963 retry.go:31] will retry after 243.155673ms: waiting for machine to come up
	I0805 12:58:23.747498  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747913  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:23.747950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:23.747867  451963 retry.go:31] will retry after 459.286566ms: waiting for machine to come up
	I0805 12:58:24.208594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209076  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.209127  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.209003  451963 retry.go:31] will retry after 499.069946ms: waiting for machine to come up
	I0805 12:58:24.709128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709554  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:24.709577  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:24.709512  451963 retry.go:31] will retry after 732.735525ms: waiting for machine to come up
	I0805 12:58:25.443632  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444185  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:25.444216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:25.444125  451963 retry.go:31] will retry after 883.69375ms: waiting for machine to come up
	I0805 12:58:26.329477  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330010  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:26.330045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:26.329947  451963 retry.go:31] will retry after 1.157298734s: waiting for machine to come up
	I0805 12:58:23.734875  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetIP
	I0805 12:58:23.737945  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738460  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 12:58:23.738487  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 12:58:23.738646  450576 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:23.742894  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:23.756164  450576 kubeadm.go:883] updating cluster {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:23.756435  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.035575  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.352144  450576 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0805 12:58:24.657175  450576 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 12:58:24.657266  450576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:24.694685  450576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0805 12:58:24.694720  450576 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:58:24.694809  450576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.694831  450576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.694845  450576 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0805 12:58:24.694867  450576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.694835  450576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.694815  450576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.694801  450576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.694917  450576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.696859  450576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.696860  450576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.696902  450576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:24.696904  450576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.696881  450576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:24.696852  450576 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0805 12:58:24.864249  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.867334  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.905018  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.920294  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.925405  450576 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0805 12:58:24.925440  450576 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0805 12:58:24.925456  450576 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.925476  450576 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.925508  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.925520  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.973191  450576 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0805 12:58:24.973240  450576 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.973304  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 12:58:24.986685  450576 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0805 12:58:24.986706  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 12:58:24.986723  450576 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:24.986642  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 12:58:24.986772  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.037012  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.037066  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0805 12:58:25.037132  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.067311  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.068528  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.073769  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073831  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.073872  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:25.073933  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:25.082476  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0805 12:58:25.126044  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0805 12:58:25.126080  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126127  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0805 12:58:25.126144  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0805 12:58:25.126230  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:25.149903  450576 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0805 12:58:25.149965  450576 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:25.150028  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196288  450576 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0805 12:58:25.196336  450576 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:25.196388  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:25.196416  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0805 12:58:25.196510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0805 12:58:25.651632  450576 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.532922  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (2.406747514s)
	I0805 12:58:27.532959  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0805 12:58:27.532994  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533010  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.406755032s)
	I0805 12:58:27.533048  450576 ssh_runner.go:235] Completed: which crictl: (2.383000552s)
	I0805 12:58:27.533050  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0805 12:58:27.533082  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0805 12:58:27.533082  450576 ssh_runner.go:235] Completed: which crictl: (2.336681164s)
	I0805 12:58:27.533095  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0805 12:58:27.533117  450576 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.88145852s)
	I0805 12:58:27.533139  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 12:58:27.533161  450576 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0805 12:58:27.533198  450576 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:27.533234  450576 ssh_runner.go:195] Run: which crictl
	I0805 12:58:27.488683  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489080  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:27.489108  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:27.489027  451963 retry.go:31] will retry after 997.566168ms: waiting for machine to come up
	I0805 12:58:28.488397  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488846  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:28.488878  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:28.488794  451963 retry.go:31] will retry after 1.327498575s: waiting for machine to come up
	I0805 12:58:29.818339  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818705  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:29.818735  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:29.818660  451963 retry.go:31] will retry after 2.105158858s: waiting for machine to come up
	I0805 12:58:31.925036  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925564  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:31.925601  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:31.925492  451963 retry.go:31] will retry after 2.860711737s: waiting for machine to come up
	I0805 12:58:29.629896  450576 ssh_runner.go:235] Completed: which crictl: (2.096633143s)
	I0805 12:58:29.630000  450576 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:58:29.630084  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.096969259s)
	I0805 12:58:29.630184  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0805 12:58:29.630102  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.09697893s)
	I0805 12:58:29.630255  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0805 12:58:29.630121  450576 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0: (2.096957841s)
	I0805 12:58:29.630282  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:29.630286  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630313  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.630322  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0805 12:58:29.630381  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:29.675831  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0805 12:58:29.675914  450576 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 12:58:29.676019  450576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:31.695376  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.06501136s)
	I0805 12:58:31.695429  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0805 12:58:31.695458  450576 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:31.695476  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.019437866s)
	I0805 12:58:31.695382  450576 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (2.064967299s)
	I0805 12:58:31.695510  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0805 12:58:31.695523  450576 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0805 12:58:31.695536  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0805 12:58:34.789126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789644  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:34.789673  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:34.789592  451963 retry.go:31] will retry after 2.763937018s: waiting for machine to come up
	I0805 12:58:33.659147  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.963588438s)
	I0805 12:58:33.659183  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0805 12:58:33.659216  450576 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:33.659263  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0805 12:58:37.466579  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.807281649s)
	I0805 12:58:37.466623  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0805 12:58:37.466657  450576 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:37.466709  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0805 12:58:38.111584  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 12:58:38.111633  450576 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:38.111678  450576 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0805 12:58:37.554827  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555233  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | unable to find current IP address of domain default-k8s-diff-port-371585 in network mk-default-k8s-diff-port-371585
	I0805 12:58:37.555263  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | I0805 12:58:37.555184  451963 retry.go:31] will retry after 3.143735106s: waiting for machine to come up
	I0805 12:58:40.701139  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701615  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Found IP for machine: 192.168.50.228
	I0805 12:58:40.701649  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has current primary IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.701660  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserving static IP address...
	I0805 12:58:40.702105  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.702126  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Reserved static IP address: 192.168.50.228
	I0805 12:58:40.702146  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | skip adding static IP to network mk-default-k8s-diff-port-371585 - found existing host DHCP lease matching {name: "default-k8s-diff-port-371585", mac: "52:54:00:f4:9f:83", ip: "192.168.50.228"}
	I0805 12:58:40.702156  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Waiting for SSH to be available...
	I0805 12:58:40.702198  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Getting to WaitForSSH function...
	I0805 12:58:40.704600  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.704920  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.704950  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.705091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH client type: external
	I0805 12:58:40.705129  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa (-rw-------)
	I0805 12:58:40.705179  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:58:40.705200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | About to run SSH command:
	I0805 12:58:40.705218  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | exit 0
	I0805 12:58:40.836818  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | SSH cmd err, output: <nil>: 
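The WaitForSSH probe above simply runs exit 0 over SSH with the machine's generated private key until the guest answers. A minimal sketch of the same probe, assuming the golang.org/x/crypto/ssh package; the address, user and key path are taken from the log, everything else is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the guest repeatedly and runs "exit 0", returning once a
// session succeeds, which is the same readiness probe the log performs with
// the external ssh binary and StrictHostKeyChecking=no.
func waitForSSH(addr, user, keyPath string, attempts int) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, as in the log
		Timeout:         10 * time.Second,
	}
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				return runErr
			}
			client.Close()
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", addr)
}

func main() {
	err := waitForSSH("192.168.50.228:22", "docker",
		"/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa", 10)
	fmt.Println("wait result:", err)
}
```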
	I0805 12:58:40.837228  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetConfigRaw
	I0805 12:58:40.837884  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:40.840503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.840843  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.840870  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.841129  450884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/config.json ...
	I0805 12:58:40.841353  450884 machine.go:94] provisionDockerMachine start ...
	I0805 12:58:40.841373  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:40.841587  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.843943  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844308  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.844336  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.844448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.844614  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.844922  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.845067  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.845322  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.845333  450884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:58:40.952367  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:58:40.952410  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952733  450884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-371585"
	I0805 12:58:40.952762  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:40.952968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:40.955642  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956045  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:40.956077  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:40.956216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:40.956493  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956651  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:40.956804  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:40.957027  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:40.957239  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:40.957255  450884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-371585 && echo "default-k8s-diff-port-371585" | sudo tee /etc/hostname
	I0805 12:58:41.077775  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-371585
	
	I0805 12:58:41.077808  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.080777  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081230  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.081273  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.081406  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.081631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081782  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.081963  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.082139  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.082315  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.082333  450884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-371585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-371585/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-371585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:58:41.200835  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:58:41.200871  450884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:58:41.200923  450884 buildroot.go:174] setting up certificates
	I0805 12:58:41.200934  450884 provision.go:84] configureAuth start
	I0805 12:58:41.200945  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetMachineName
	I0805 12:58:41.201284  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:41.204107  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204460  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.204494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.204631  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.206634  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.206948  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.206977  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.207048  450884 provision.go:143] copyHostCerts
	I0805 12:58:41.207139  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:58:41.207151  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:58:41.207215  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:58:41.207333  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:58:41.207345  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:58:41.207372  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:58:41.207451  450884 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:58:41.207462  450884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:58:41.207502  450884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:58:41.207573  450884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-371585 san=[127.0.0.1 192.168.50.228 default-k8s-diff-port-371585 localhost minikube]
	I0805 12:58:41.357243  450884 provision.go:177] copyRemoteCerts
	I0805 12:58:41.357344  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:58:41.357386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.360309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360697  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.360738  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.360933  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.361120  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.361295  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.361474  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.454251  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:58:41.480595  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0805 12:58:41.506729  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 12:58:41.533349  450884 provision.go:87] duration metric: took 332.399026ms to configureAuth
	I0805 12:58:41.533402  450884 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:58:41.533575  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:58:41.533655  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.536469  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.536831  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.536862  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.537006  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.537197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537386  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.537541  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.537734  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.537946  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.537968  450884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:58:41.827043  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:58:41.827078  450884 machine.go:97] duration metric: took 985.710155ms to provisionDockerMachine
	I0805 12:58:41.827095  450884 start.go:293] postStartSetup for "default-k8s-diff-port-371585" (driver="kvm2")
	I0805 12:58:41.827109  450884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:58:41.827145  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:41.827564  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:58:41.827605  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.830350  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830724  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.830761  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.830853  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.831034  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.831206  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.831329  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:41.915261  450884 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:58:41.919719  450884 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:58:41.919760  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:58:41.919835  450884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:58:41.919930  450884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:58:41.920062  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:58:41.929842  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:41.958933  450884 start.go:296] duration metric: took 131.820227ms for postStartSetup
	I0805 12:58:41.958981  450884 fix.go:56] duration metric: took 20.010130311s for fixHost
	I0805 12:58:41.959012  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:41.962092  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962510  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:41.962540  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:41.962726  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:41.962968  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963153  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:41.963309  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:41.963479  450884 main.go:141] libmachine: Using SSH client type: native
	I0805 12:58:41.963687  450884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.228 22 <nil> <nil>}
	I0805 12:58:41.963700  450884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:58:42.080993  451238 start.go:364] duration metric: took 3m30.014883629s to acquireMachinesLock for "old-k8s-version-635707"
	I0805 12:58:42.081066  451238 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:58:42.081076  451238 fix.go:54] fixHost starting: 
	I0805 12:58:42.081569  451238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:58:42.081611  451238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:58:42.101889  451238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0805 12:58:42.102366  451238 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:58:42.102910  451238 main.go:141] libmachine: Using API Version  1
	I0805 12:58:42.102947  451238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:58:42.103310  451238 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:58:42.103552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:58:42.103718  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetState
	I0805 12:58:42.105465  451238 fix.go:112] recreateIfNeeded on old-k8s-version-635707: state=Stopped err=<nil>
	I0805 12:58:42.105504  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	W0805 12:58:42.105674  451238 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:58:42.107563  451238 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-635707" ...
	I0805 12:58:39.567840  450576 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.456137011s)
	I0805 12:58:39.567879  450576 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0805 12:58:39.567905  450576 cache_images.go:123] Successfully loaded all cached images
	I0805 12:58:39.567911  450576 cache_images.go:92] duration metric: took 14.873174481s to LoadCachedImages
	I0805 12:58:39.567921  450576 kubeadm.go:934] updating node { 192.168.72.223 8443 v1.31.0-rc.0 crio true true} ...
	I0805 12:58:39.568053  450576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-669469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:39.568137  450576 ssh_runner.go:195] Run: crio config
	I0805 12:58:39.616607  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:39.616634  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:39.616660  450576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:39.616683  450576 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.223 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-669469 NodeName:no-preload-669469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:39.616822  450576 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-669469"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
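The multi-document YAML above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new before running the kubeadm phases. To inspect such a dump programmatically (for example when diffing a failed run against a passing one), the documents can be decoded one at a time; a minimal sketch assuming gopkg.in/yaml.v3 and a local copy of the file named kubeadm.yaml:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a saved copy of /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document has been read
		}
		// Each document carries its own kind: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		if doc["kind"] == "ClusterConfiguration" {
			fmt.Println("  kubernetesVersion:", doc["kubernetesVersion"])
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				fmt.Println("  podSubnet:", net["podSubnet"])
			}
		}
	}
}
```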
	I0805 12:58:39.616896  450576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0805 12:58:39.627827  450576 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:39.627901  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:39.637348  450576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0805 12:58:39.653917  450576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0805 12:58:39.670196  450576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0805 12:58:39.686922  450576 ssh_runner.go:195] Run: grep 192.168.72.223	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:39.690804  450576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:39.703146  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:39.834718  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:58:39.857015  450576 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469 for IP: 192.168.72.223
	I0805 12:58:39.857036  450576 certs.go:194] generating shared ca certs ...
	I0805 12:58:39.857057  450576 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:39.857229  450576 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:39.857286  450576 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:39.857300  450576 certs.go:256] generating profile certs ...
	I0805 12:58:39.857431  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/client.key
	I0805 12:58:39.857489  450576 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key.dd0884bb
	I0805 12:58:39.857535  450576 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key
	I0805 12:58:39.857683  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:39.857723  450576 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:39.857739  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:39.857769  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:39.857834  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:39.857872  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:39.857923  450576 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:39.858695  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:39.895944  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:39.925816  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:39.960150  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:39.993307  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 12:58:40.027900  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:58:40.053492  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:40.077331  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/no-preload-669469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:40.101010  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:40.123991  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:40.147563  450576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:40.170414  450576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:40.188256  450576 ssh_runner.go:195] Run: openssl version
	I0805 12:58:40.193955  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:40.204793  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209061  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.209115  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:40.214948  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:40.226193  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:40.237723  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.241960  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.242019  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:40.247502  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:40.258791  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:40.270176  450576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274717  450576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.274786  450576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:40.280457  450576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:40.292091  450576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:40.296842  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:40.303003  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:40.309009  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:40.314951  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:40.320674  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:40.326433  450576 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
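The run of openssl x509 ... -checkend 86400 calls above asks whether each control-plane certificate will still be valid 24 hours from now. The same check can be done with the Go standard library; a minimal sketch, using one of the certificate paths named in the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file will
// expire within d, i.e. the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```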
	I0805 12:58:40.331848  450576 kubeadm.go:392] StartCluster: {Name:no-preload-669469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-669469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:40.331938  450576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:40.331975  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.374390  450576 cri.go:89] found id: ""
	I0805 12:58:40.374482  450576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:40.385467  450576 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:40.385485  450576 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:40.385531  450576 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:40.395411  450576 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:40.396455  450576 kubeconfig.go:125] found "no-preload-669469" server: "https://192.168.72.223:8443"
	I0805 12:58:40.400090  450576 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:40.410942  450576 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.223
	I0805 12:58:40.410971  450576 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:40.410985  450576 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:40.411032  450576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:40.453021  450576 cri.go:89] found id: ""
	I0805 12:58:40.453115  450576 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:40.470389  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:40.480421  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:40.480445  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:40.480502  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:58:40.489625  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:40.489672  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:40.499261  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:58:40.508571  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:40.508634  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:40.517811  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.526563  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:40.526620  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:40.535753  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:58:40.544981  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:40.545040  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:40.555237  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:40.565180  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:40.683889  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.632122  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.866665  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:41.944022  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:42.048030  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:42.048127  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:42.548995  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.048336  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:43.086457  450576 api_server.go:72] duration metric: took 1.038426772s to wait for apiserver process to appear ...
	I0805 12:58:43.086487  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:43.086509  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:43.086982  450576 api_server.go:269] stopped: https://192.168.72.223:8443/healthz: Get "https://192.168.72.223:8443/healthz": dial tcp 192.168.72.223:8443: connect: connection refused
	I0805 12:58:42.080800  450884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862722.053648046
	
	I0805 12:58:42.080828  450884 fix.go:216] guest clock: 1722862722.053648046
	I0805 12:58:42.080839  450884 fix.go:229] Guest: 2024-08-05 12:58:42.053648046 +0000 UTC Remote: 2024-08-05 12:58:41.958987261 +0000 UTC m=+264.923354352 (delta=94.660785ms)
	I0805 12:58:42.080867  450884 fix.go:200] guest clock delta is within tolerance: 94.660785ms
	I0805 12:58:42.080876  450884 start.go:83] releasing machines lock for "default-k8s-diff-port-371585", held for 20.132054114s
	I0805 12:58:42.080916  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.081260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:42.084196  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084662  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.084695  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.084867  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085589  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085786  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 12:58:42.085875  450884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:58:42.085925  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.086064  450884 ssh_runner.go:195] Run: cat /version.json
	I0805 12:58:42.086091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 12:58:42.088693  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089018  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089042  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089197  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089260  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.089455  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.089729  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.089730  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:42.089785  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:42.089881  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.089970  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 12:58:42.090128  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 12:58:42.090286  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 12:58:42.090457  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 12:58:42.193160  450884 ssh_runner.go:195] Run: systemctl --version
	I0805 12:58:42.199341  450884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:58:42.344713  450884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:58:42.350944  450884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:58:42.351026  450884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:58:42.368162  450884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:58:42.368196  450884 start.go:495] detecting cgroup driver to use...
	I0805 12:58:42.368260  450884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:58:42.384477  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:58:42.401847  450884 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:58:42.401907  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:58:42.416318  450884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:58:42.430994  450884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:58:42.545944  450884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:58:42.721877  450884 docker.go:233] disabling docker service ...
	I0805 12:58:42.721961  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:58:42.743504  450884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:58:42.763111  450884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:58:42.914270  450884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:58:43.064816  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:58:43.090748  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:58:43.115493  450884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:58:43.115565  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.132497  450884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:58:43.132583  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.146700  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.159880  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.175598  450884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:58:43.191263  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.207573  450884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.229567  450884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:58:43.248604  450884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:58:43.261272  450884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:58:43.261350  450884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:58:43.276740  450884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:58:43.288473  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:43.436066  450884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:58:43.593264  450884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:58:43.593355  450884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:58:43.599342  450884 start.go:563] Will wait 60s for crictl version
	I0805 12:58:43.599419  450884 ssh_runner.go:195] Run: which crictl
	I0805 12:58:43.603681  450884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:58:43.651181  450884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:58:43.651296  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.691418  450884 ssh_runner.go:195] Run: crio --version
	I0805 12:58:43.725036  450884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:58:42.109016  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .Start
	I0805 12:58:42.109214  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring networks are active...
	I0805 12:58:42.110192  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network default is active
	I0805 12:58:42.110686  451238 main.go:141] libmachine: (old-k8s-version-635707) Ensuring network mk-old-k8s-version-635707 is active
	I0805 12:58:42.111108  451238 main.go:141] libmachine: (old-k8s-version-635707) Getting domain xml...
	I0805 12:58:42.112194  451238 main.go:141] libmachine: (old-k8s-version-635707) Creating domain...
	I0805 12:58:43.453015  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting to get IP...
	I0805 12:58:43.453994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.454435  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.454504  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.454435  452186 retry.go:31] will retry after 270.355403ms: waiting for machine to come up
	I0805 12:58:43.727101  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:43.727583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:43.727641  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:43.727568  452186 retry.go:31] will retry after 313.75466ms: waiting for machine to come up
	I0805 12:58:44.043303  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.043954  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.043981  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.043855  452186 retry.go:31] will retry after 308.608573ms: waiting for machine to come up
	I0805 12:58:44.354830  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.355396  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.355421  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.355305  452186 retry.go:31] will retry after 510.256657ms: waiting for machine to come up
	I0805 12:58:44.866970  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:44.867534  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:44.867559  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:44.867424  452186 retry.go:31] will retry after 668.55006ms: waiting for machine to come up
	I0805 12:58:45.537377  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:45.537959  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:45.537989  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:45.537909  452186 retry.go:31] will retry after 677.549944ms: waiting for machine to come up
	I0805 12:58:46.217077  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:46.217591  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:46.217625  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:46.217483  452186 retry.go:31] will retry after 847.636867ms: waiting for machine to come up
	I0805 12:58:43.726277  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetIP
	I0805 12:58:43.729689  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730162  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 12:58:43.730195  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 12:58:43.730391  450884 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0805 12:58:43.735448  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:43.749640  450884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:58:43.749808  450884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:58:43.749886  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:43.798507  450884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:58:43.798584  450884 ssh_runner.go:195] Run: which lz4
	I0805 12:58:43.803306  450884 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:58:43.809104  450884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:58:43.809144  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:58:45.333758  450884 crio.go:462] duration metric: took 1.530500213s to copy over tarball
	I0805 12:58:45.333831  450884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:58:43.587275  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.303995  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.304038  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.304057  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.308815  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.308849  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:46.587239  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:46.595116  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:46.595151  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.087372  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.094319  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.094363  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:47.586909  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:47.592210  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:47.592252  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.086763  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.095151  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.095182  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:48.586840  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:48.593834  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:48.593870  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.087516  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.093647  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:49.093677  450576 api_server.go:103] status: https://192.168.72.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:49.587309  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 12:58:49.593592  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 12:58:49.602960  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 12:58:49.603001  450576 api_server.go:131] duration metric: took 6.516505116s to wait for apiserver health ...
	I0805 12:58:49.603013  450576 cni.go:84] Creating CNI manager for ""
	I0805 12:58:49.603024  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:49.851135  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:47.067245  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:47.067895  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:47.067930  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:47.067838  452186 retry.go:31] will retry after 1.275228928s: waiting for machine to come up
	I0805 12:58:48.344881  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:48.345295  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:48.345319  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:48.345258  452186 retry.go:31] will retry after 1.826891386s: waiting for machine to come up
	I0805 12:58:50.174583  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:50.175111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:50.175138  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:50.175074  452186 retry.go:31] will retry after 1.53756677s: waiting for machine to come up
	I0805 12:58:51.714025  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:51.714529  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:51.714553  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:51.714485  452186 retry.go:31] will retry after 2.762270002s: waiting for machine to come up
	I0805 12:58:47.908896  450884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575029516s)
	I0805 12:58:47.908929  450884 crio.go:469] duration metric: took 2.575138566s to extract the tarball
	I0805 12:58:47.908938  450884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:58:47.964757  450884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:58:48.013358  450884 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:58:48.013392  450884 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:58:48.013404  450884 kubeadm.go:934] updating node { 192.168.50.228 8444 v1.30.3 crio true true} ...
	I0805 12:58:48.013533  450884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-371585 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:58:48.013623  450884 ssh_runner.go:195] Run: crio config
	I0805 12:58:48.062183  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:48.062219  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:48.062238  450884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:58:48.062274  450884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.228 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-371585 NodeName:default-k8s-diff-port-371585 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:58:48.062474  450884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.228
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-371585"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
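
The block above is the full kubeadm configuration minikube generated (kubeadm.go:187) and later copies to /var/tmp/minikube/kubeadm.yaml.new. A small, hypothetical sketch of rendering such a ClusterConfiguration fragment from a parameter struct with text/template; minikube's real template covers far more fields and lives elsewhere:

    package main

    import (
        "os"
        "text/template"
    )

    // clusterParams holds just the fields this sketch substitutes into the
    // ClusterConfiguration fragment; the real generator carries many more.
    type clusterParams struct {
        AdvertiseAddress  string
        BindPort          int
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
    }

    const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
    apiServer:
      certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        p := clusterParams{
            AdvertiseAddress:  "192.168.50.228",
            BindPort:          8444,
            KubernetesVersion: "v1.30.3",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        }
        // Render the fragment to stdout; the real flow writes kubeadm.yaml.new
        // to the node and diffs it against the copy already there.
        t := template.Must(template.New("cluster").Parse(clusterTmpl))
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
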
	
	I0805 12:58:48.062552  450884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:58:48.076490  450884 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:58:48.076583  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:58:48.090058  450884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0805 12:58:48.110202  450884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:58:48.131420  450884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0805 12:58:48.151774  450884 ssh_runner.go:195] Run: grep 192.168.50.228	control-plane.minikube.internal$ /etc/hosts
	I0805 12:58:48.156904  450884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:58:48.172398  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:58:48.292999  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
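
The bash one-liner above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale line for the hostname is dropped and a single fresh ip<TAB>hostname line is appended, after which systemd is reloaded and the kubelet started. The same edit expressed as a pure string transformation (file I/O and sudo omitted; the function name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry drops blanks and any existing line ending in
    // "<TAB>hostname", then appends a fresh "ip<TAB>hostname" line, mirroring:
    //   { grep -v $'\thostname$' /etc/hosts; echo "ip\thostname"; } > /tmp/h.$$
    func ensureHostsEntry(hosts, hostname, ip string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+hostname) {
                continue // strip blanks and the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        before := "127.0.0.1\tlocalhost\n192.168.50.1\tcontrol-plane.minikube.internal\n"
        fmt.Print(ensureHostsEntry(before, "control-plane.minikube.internal", "192.168.50.228"))
    }
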
	I0805 12:58:48.310331  450884 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585 for IP: 192.168.50.228
	I0805 12:58:48.310366  450884 certs.go:194] generating shared ca certs ...
	I0805 12:58:48.310389  450884 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:58:48.310576  450884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:58:48.310640  450884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:58:48.310658  450884 certs.go:256] generating profile certs ...
	I0805 12:58:48.310803  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/client.key
	I0805 12:58:48.310881  450884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key.f7891227
	I0805 12:58:48.310946  450884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key
	I0805 12:58:48.311231  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:58:48.311317  450884 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:58:48.311354  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:58:48.311408  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:58:48.311447  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:58:48.311485  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:58:48.311545  450884 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:58:48.312365  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:58:48.363733  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:58:48.395662  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:58:48.450822  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:58:48.495611  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 12:58:48.529393  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:58:48.557543  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:58:48.584777  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/default-k8s-diff-port-371585/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:58:48.611987  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:58:48.637500  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:58:48.664469  450884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:58:48.690221  450884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:58:48.709082  450884 ssh_runner.go:195] Run: openssl version
	I0805 12:58:48.716181  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:58:48.728455  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733395  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.733456  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:58:48.739295  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:58:48.750515  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:58:48.761506  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.765995  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.766052  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:58:48.772121  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:58:48.783123  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:58:48.794318  450884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798795  450884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.798843  450884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:58:48.804878  450884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:58:48.816757  450884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:58:48.821686  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:58:48.828121  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:58:48.834386  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:58:48.840425  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:58:48.846218  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:58:48.852035  450884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
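
Each `openssl x509 ... -checkend 86400` call above asserts that the named certificate will still be valid 24 hours from now; any failure would force regeneration before kubeadm runs. An equivalent check with crypto/x509, assuming the PEM file is readable from the current process:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file is still
    // valid at now+window, i.e. what `openssl x509 -checkend <seconds>` tests.
    func validFor(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("valid for another 24h:", ok)
    }
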
	I0805 12:58:48.857997  450884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-371585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-371585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:58:48.858131  450884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:58:48.858179  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.908402  450884 cri.go:89] found id: ""
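
The crictl command above looks for leftover kube-system containers from a previous run (none here, hence the empty id list). A thin wrapper around the same invocation, assuming crictl is installed and the caller is allowed to sudo:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers shells out to crictl the same way the log line
    // above does and returns the container IDs, one per output line.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
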
	I0805 12:58:48.908471  450884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:58:48.921185  450884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:58:48.921207  450884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:58:48.921258  450884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:58:48.932907  450884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:58:48.933927  450884 kubeconfig.go:125] found "default-k8s-diff-port-371585" server: "https://192.168.50.228:8444"
	I0805 12:58:48.936058  450884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:58:48.947233  450884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.228
	I0805 12:58:48.947262  450884 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:58:48.947273  450884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:58:48.947313  450884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:58:48.988179  450884 cri.go:89] found id: ""
	I0805 12:58:48.988281  450884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:58:49.005901  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:58:49.016576  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:58:49.016597  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 12:58:49.016648  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 12:58:49.029718  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:58:49.029822  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:58:49.041670  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 12:58:49.051650  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:58:49.051724  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:58:49.061671  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.071671  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:58:49.071755  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:58:49.082022  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 12:58:49.092013  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:58:49.092103  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:58:49.105446  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:58:49.118581  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:49.233260  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.199462  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.418823  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.500350  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.594991  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:58:50.595109  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.096171  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.596111  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:58:51.633309  450884 api_server.go:72] duration metric: took 1.038316986s to wait for apiserver process to appear ...
	I0805 12:58:51.633350  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:58:51.633377  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:51.634005  450884 api_server.go:269] stopped: https://192.168.50.228:8444/healthz: Get "https://192.168.50.228:8444/healthz": dial tcp 192.168.50.228:8444: connect: connection refused
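
Before probing /healthz, this restart path first waits for a kube-apiserver process to exist at all, re-running pgrep roughly every half second. A sketch of that wait loop; the pgrep pattern is copied from the log, while the polling interval and timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*`
    // until it exits 0 (a matching process exists) or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(time.Minute); err != nil {
            fmt.Println(err)
        }
    }
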
	I0805 12:58:50.021635  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:50.036338  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:50.060746  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:50.159670  450576 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:50.159724  450576 system_pods.go:61] "coredns-6f6b679f8f-nkv88" [ee7e59fb-2500-4d7a-9537-e38e08fb2445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:50.159737  450576 system_pods.go:61] "etcd-no-preload-669469" [095df0f1-069a-419f-815b-ddbec3a2291f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:50.159762  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [20b45902-b807-457a-93b3-d2b9b76d2598] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:50.159772  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [122a47ed-7f6f-4b2e-980a-45f41b997dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:50.159780  450576 system_pods.go:61] "kube-proxy-cwq69" [78e0333b-a0f4-40a6-a04d-6971bb4d09a8] Running
	I0805 12:58:50.159788  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [88010c2b-b32f-4fe1-952d-262e881b76dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:50.159796  450576 system_pods.go:61] "metrics-server-6867b74b74-p7b2r" [7e4dd805-07c8-4339-bf1a-57a98fd674cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:50.159808  450576 system_pods.go:61] "storage-provisioner" [207c46c5-c3c0-4f0b-b3ea-9b42b9e5f761] Running
	I0805 12:58:50.159817  450576 system_pods.go:74] duration metric: took 99.038765ms to wait for pod list to return data ...
	I0805 12:58:50.159830  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:50.163888  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:50.163923  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:50.163956  450576 node_conditions.go:105] duration metric: took 4.11869ms to run NodePressure ...
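
The NodePressure verification above reads each node's capacity (ephemeral storage, CPU) and confirms no pressure condition is set. A hypothetical equivalent with client-go, assuming the library is available and the kubeconfig path is known; this is not the code behind node_conditions.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // checkNodePressure lists the nodes, prints their ephemeral-storage and CPU
    // capacity, and reports any pressure condition that is currently True.
    func checkNodePressure(kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
                n.Name, storage.String(), n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        fmt.Printf("  pressure: %s\n", c.Type)
                    }
                }
            }
        }
        return nil
    }

    func main() {
        if err := checkNodePressure("/path/to/kubeconfig"); err != nil { // placeholder path
            fmt.Println(err)
        }
    }
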
	I0805 12:58:50.163980  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:50.849885  450576 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854483  450576 kubeadm.go:739] kubelet initialised
	I0805 12:58:50.854505  450576 kubeadm.go:740] duration metric: took 4.588388ms waiting for restarted kubelet to initialise ...
	I0805 12:58:50.854514  450576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:50.861245  450576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:52.869370  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
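
From here the test waits, pod by pod, for the system-critical pods to report Ready, polling each one for up to 4 minutes. A compact client-go sketch of the same readiness wait; the kubeconfig path in main is a placeholder and the poll interval is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the pod_ready.go checks above: a pod counts as "Ready"
    // only when its PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForPodReady polls the named pod until it is Ready or the timeout hits,
    // roughly what "waiting up to 4m0s for pod ... to be Ready" does.
    func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(waitForPodReady(cs, "kube-system", "coredns-6f6b679f8f-nkv88", 4*time.Minute))
    }
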
	I0805 12:58:52.134427  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.933253  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.933288  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:54.933305  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:54.970883  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:58:54.970928  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:58:55.134250  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.139762  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.139798  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:55.634499  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:55.644495  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:55.644532  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.134123  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.141958  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:58:56.142002  450884 api_server.go:103] status: https://192.168.50.228:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:58:56.633573  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 12:58:56.640578  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 12:58:56.649624  450884 api_server.go:141] control plane version: v1.30.3
	I0805 12:58:56.649659  450884 api_server.go:131] duration metric: took 5.016299114s to wait for apiserver health ...
	I0805 12:58:56.649671  450884 cni.go:84] Creating CNI manager for ""
	I0805 12:58:56.649681  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:58:56.651587  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 12:58:54.478201  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:54.478619  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:54.478650  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:54.478579  452186 retry.go:31] will retry after 2.992766963s: waiting for machine to come up
	I0805 12:58:56.652853  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:58:56.663878  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:58:56.699765  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:58:56.715040  450884 system_pods.go:59] 8 kube-system pods found
	I0805 12:58:56.715078  450884 system_pods.go:61] "coredns-7db6d8ff4d-8rzb7" [df42e41d-4544-493f-a09d-678df1fb5258] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:58:56.715085  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [1ab6cd59-432a-44b8-95f2-948c585d9bbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:58:56.715092  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [c9173b98-c77e-4ad0-aea5-c894c045e0c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:58:56.715101  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [283737ec-1afa-4994-9cee-b655a8397a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:58:56.715105  450884 system_pods.go:61] "kube-proxy-5dr9v" [767ccb8b-2db0-4b59-b3b0-e099185bc725] Running
	I0805 12:58:56.715111  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [fb3cfdea-9370-4842-a5ab-5ac24804f59e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:58:56.715116  450884 system_pods.go:61] "metrics-server-569cc877fc-dsrqr" [0d4c79e4-aa6c-42f5-840b-91b9d714d078] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:58:56.715125  450884 system_pods.go:61] "storage-provisioner" [2dba6f50-5cdc-4195-8daf-c19dac38f488] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:58:56.715133  450884 system_pods.go:74] duration metric: took 15.343284ms to wait for pod list to return data ...
	I0805 12:58:56.715144  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:58:56.720006  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:58:56.720031  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 12:58:56.720042  450884 node_conditions.go:105] duration metric: took 4.893566ms to run NodePressure ...
	I0805 12:58:56.720059  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:58:56.985822  450884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990461  450884 kubeadm.go:739] kubelet initialised
	I0805 12:58:56.990484  450884 kubeadm.go:740] duration metric: took 4.636814ms waiting for restarted kubelet to initialise ...
	I0805 12:58:56.990493  450884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:58:56.996266  450884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.001407  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001434  450884 pod_ready.go:81] duration metric: took 5.140963ms for pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.001446  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "coredns-7db6d8ff4d-8rzb7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.001456  450884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.005437  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005473  450884 pod_ready.go:81] duration metric: took 3.995646ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.005486  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.005495  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.009923  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009943  450884 pod_ready.go:81] duration metric: took 4.439871ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.009952  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.009958  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:54.869534  450576 pod_ready.go:102] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"False"
	I0805 12:58:56.370007  450576 pod_ready.go:92] pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:56.370035  450576 pod_ready.go:81] duration metric: took 5.508756413s for pod "coredns-6f6b679f8f-nkv88" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:56.370045  450576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376357  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.376386  450576 pod_ready.go:81] duration metric: took 2.006334873s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.376396  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.473094  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:58:57.473555  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | unable to find current IP address of domain old-k8s-version-635707 in network mk-old-k8s-version-635707
	I0805 12:58:57.473587  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | I0805 12:58:57.473495  452186 retry.go:31] will retry after 4.27138033s: waiting for machine to come up
	I0805 12:59:01.750111  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750558  451238 main.go:141] libmachine: (old-k8s-version-635707) Found IP for machine: 192.168.61.41
	I0805 12:59:01.750586  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has current primary IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.750593  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserving static IP address...
	I0805 12:59:01.751003  451238 main.go:141] libmachine: (old-k8s-version-635707) Reserved static IP address: 192.168.61.41
	I0805 12:59:01.751061  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.751081  451238 main.go:141] libmachine: (old-k8s-version-635707) Waiting for SSH to be available...
	I0805 12:59:01.751112  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | skip adding static IP to network mk-old-k8s-version-635707 - found existing host DHCP lease matching {name: "old-k8s-version-635707", mac: "52:54:00:2a:da:c5", ip: "192.168.61.41"}
	I0805 12:59:01.751130  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Getting to WaitForSSH function...
	I0805 12:59:01.753240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753634  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.753672  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.753810  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH client type: external
	I0805 12:59:01.753854  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa (-rw-------)
	I0805 12:59:01.753900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:01.753919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | About to run SSH command:
	I0805 12:59:01.753933  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | exit 0
	I0805 12:59:01.875919  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:01.876298  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetConfigRaw
	I0805 12:59:01.877028  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:01.879644  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880120  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.880164  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.880508  451238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/config.json ...
	I0805 12:59:01.880778  451238 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:01.880805  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:01.881039  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.882998  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883362  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.883389  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.883553  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.883755  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.883900  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.884012  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.884248  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.884496  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.884511  451238 main.go:141] libmachine: About to run SSH command:
	hostname
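
provisionDockerMachine opens an SSH session to the freshly booted VM and runs `hostname` as its first probe, first through the external ssh binary and then with the native client shown above. A compact equivalent using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log lines, everything else is illustrative:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runHostname dials the VM and runs `hostname`, which is what the
    // provisioning step above uses as its first liveness probe over SSH.
    func runHostname(addr, user, keyPath string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.Output("hostname")
        return string(out), err
    }

    func main() {
        out, err := runHostname("192.168.61.41:22", "docker",
            "/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(out)
    }
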
	I0805 12:58:57.103049  450884 pod_ready.go:97] node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103095  450884 pod_ready.go:81] duration metric: took 93.113727ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	E0805 12:58:57.103109  450884 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-371585" hosting pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-371585" has status "Ready":"False"
	I0805 12:58:57.103116  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503531  450884 pod_ready.go:92] pod "kube-proxy-5dr9v" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:57.503556  450884 pod_ready.go:81] duration metric: took 400.433562ms for pod "kube-proxy-5dr9v" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:57.503565  450884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:59.514591  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:02.011308  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.148902  450393 start.go:364] duration metric: took 56.514427046s to acquireMachinesLock for "embed-certs-321139"
	I0805 12:59:03.148967  450393 start.go:96] Skipping create...Using existing machine configuration
	I0805 12:59:03.148976  450393 fix.go:54] fixHost starting: 
	I0805 12:59:03.149432  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:03.149473  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:03.166485  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0805 12:59:03.166934  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:03.167443  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:03.167469  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:03.167808  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:03.168062  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:03.168258  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:03.170011  450393 fix.go:112] recreateIfNeeded on embed-certs-321139: state=Stopped err=<nil>
	I0805 12:59:03.170036  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	W0805 12:59:03.170221  450393 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 12:59:03.172109  450393 out.go:177] * Restarting existing kvm2 VM for "embed-certs-321139" ...
	I0805 12:58:58.886766  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.886792  450576 pod_ready.go:81] duration metric: took 510.389529ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.886804  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891878  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.891907  450576 pod_ready.go:81] duration metric: took 5.094036ms for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.891919  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896953  450576 pod_ready.go:92] pod "kube-proxy-cwq69" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.896981  450576 pod_ready.go:81] duration metric: took 5.054422ms for pod "kube-proxy-cwq69" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.896995  450576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902437  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 12:58:58.902456  450576 pod_ready.go:81] duration metric: took 5.453487ms for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 12:58:58.902465  450576 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:00.909633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.410487  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:03.173728  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Start
	I0805 12:59:03.173932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring networks are active...
	I0805 12:59:03.174932  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network default is active
	I0805 12:59:03.175441  450393 main.go:141] libmachine: (embed-certs-321139) Ensuring network mk-embed-certs-321139 is active
	I0805 12:59:03.176102  450393 main.go:141] libmachine: (embed-certs-321139) Getting domain xml...
	I0805 12:59:03.176848  450393 main.go:141] libmachine: (embed-certs-321139) Creating domain...
	I0805 12:59:01.984198  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:01.984237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984501  451238 buildroot.go:166] provisioning hostname "old-k8s-version-635707"
	I0805 12:59:01.984534  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:01.984750  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:01.987690  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988085  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:01.988115  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:01.988240  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:01.988470  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988782  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:01.988945  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:01.989173  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:01.989407  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:01.989425  451238 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-635707 && echo "old-k8s-version-635707" | sudo tee /etc/hostname
	I0805 12:59:02.108368  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-635707
	
	I0805 12:59:02.108406  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.111301  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111669  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.111712  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.111837  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.112027  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112212  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.112393  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.112563  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.112797  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.112824  451238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-635707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-635707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-635707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:02.225638  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:02.225681  451238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:02.225731  451238 buildroot.go:174] setting up certificates
	I0805 12:59:02.225745  451238 provision.go:84] configureAuth start
	I0805 12:59:02.225760  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetMachineName
	I0805 12:59:02.226099  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:02.229252  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229643  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.229671  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.229885  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.232479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.232912  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.232951  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.233125  451238 provision.go:143] copyHostCerts
	I0805 12:59:02.233188  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:02.233201  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:02.233271  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:02.233412  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:02.233426  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:02.233459  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:02.233543  451238 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:02.233553  451238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:02.233581  451238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:02.233661  451238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-635707 san=[127.0.0.1 192.168.61.41 localhost minikube old-k8s-version-635707]
	I0805 12:59:02.470213  451238 provision.go:177] copyRemoteCerts
	I0805 12:59:02.470328  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:02.470369  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.473450  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473791  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.473829  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.473964  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.474173  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.474313  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.474429  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:02.558831  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:02.583652  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0805 12:59:02.609154  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:02.635827  451238 provision.go:87] duration metric: took 410.067115ms to configureAuth
	I0805 12:59:02.635862  451238 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:02.636109  451238 config.go:182] Loaded profile config "old-k8s-version-635707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0805 12:59:02.636357  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.638964  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639466  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.639489  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.639644  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.639953  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640197  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.640454  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.640733  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:02.640975  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:02.641000  451238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:02.917466  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:02.917499  451238 machine.go:97] duration metric: took 1.036701572s to provisionDockerMachine
	I0805 12:59:02.917512  451238 start.go:293] postStartSetup for "old-k8s-version-635707" (driver="kvm2")
	I0805 12:59:02.917522  451238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:02.917539  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:02.917946  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:02.917979  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:02.920900  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921383  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:02.921426  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:02.921552  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:02.921773  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:02.921958  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:02.922220  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.003670  451238 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:03.008348  451238 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:03.008384  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:03.008468  451238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:03.008588  451238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:03.008727  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:03.019098  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:03.042969  451238 start.go:296] duration metric: took 125.441712ms for postStartSetup
	I0805 12:59:03.043011  451238 fix.go:56] duration metric: took 20.961935899s for fixHost
	I0805 12:59:03.043034  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.045667  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046030  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.046062  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.046254  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.046508  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046701  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.046824  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.047002  451238 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:03.047182  451238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0805 12:59:03.047192  451238 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 12:59:03.148773  451238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862743.120260193
	
	I0805 12:59:03.148798  451238 fix.go:216] guest clock: 1722862743.120260193
	I0805 12:59:03.148807  451238 fix.go:229] Guest: 2024-08-05 12:59:03.120260193 +0000 UTC Remote: 2024-08-05 12:59:03.043015059 +0000 UTC m=+231.118249223 (delta=77.245134ms)
	I0805 12:59:03.148831  451238 fix.go:200] guest clock delta is within tolerance: 77.245134ms
	I0805 12:59:03.148836  451238 start.go:83] releasing machines lock for "old-k8s-version-635707", held for 21.067801046s
	I0805 12:59:03.148857  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.149131  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:03.152026  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152444  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.152475  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.152645  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153237  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153423  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .DriverName
	I0805 12:59:03.153495  451238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:03.153551  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.153860  451238 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:03.153895  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHHostname
	I0805 12:59:03.156566  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156903  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.156963  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.156994  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157187  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157411  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.157479  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:03.157508  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:03.157594  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.157770  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHPort
	I0805 12:59:03.157782  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.157924  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHKeyPath
	I0805 12:59:03.158107  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetSSHUsername
	I0805 12:59:03.158344  451238 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/old-k8s-version-635707/id_rsa Username:docker}
	I0805 12:59:03.254162  451238 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:03.260684  451238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:03.409837  451238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:03.416010  451238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:03.416093  451238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:03.433548  451238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:03.433584  451238 start.go:495] detecting cgroup driver to use...
	I0805 12:59:03.433667  451238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:03.450756  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:03.467281  451238 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:03.467341  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:03.482537  451238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:03.498623  451238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:03.621224  451238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:03.781777  451238 docker.go:233] disabling docker service ...
	I0805 12:59:03.781842  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:03.798020  451238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:03.818262  451238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:03.940897  451238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:04.075622  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:04.092487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:04.112699  451238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0805 12:59:04.112769  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.124102  451238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:04.124181  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.136339  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.147689  451238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:04.158552  451238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:04.171412  451238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:04.183284  451238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:04.183336  451238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:04.199465  451238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:04.215571  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:04.342540  451238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:04.521705  451238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:04.521786  451238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:04.526734  451238 start.go:563] Will wait 60s for crictl version
	I0805 12:59:04.526795  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:04.530528  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:04.572468  451238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:04.572557  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.602411  451238 ssh_runner.go:195] Run: crio --version
	I0805 12:59:04.636641  451238 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0805 12:59:04.638062  451238 main.go:141] libmachine: (old-k8s-version-635707) Calling .GetIP
	I0805 12:59:04.641240  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641734  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:da:c5", ip: ""} in network mk-old-k8s-version-635707: {Iface:virbr3 ExpiryTime:2024-08-05 13:58:54 +0000 UTC Type:0 Mac:52:54:00:2a:da:c5 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:old-k8s-version-635707 Clientid:01:52:54:00:2a:da:c5}
	I0805 12:59:04.641763  451238 main.go:141] libmachine: (old-k8s-version-635707) DBG | domain old-k8s-version-635707 has defined IP address 192.168.61.41 and MAC address 52:54:00:2a:da:c5 in network mk-old-k8s-version-635707
	I0805 12:59:04.641991  451238 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:04.646446  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:04.659876  451238 kubeadm.go:883] updating cluster {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:04.660037  451238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 12:59:04.660105  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:04.709636  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:04.709725  451238 ssh_runner.go:195] Run: which lz4
	I0805 12:59:04.714439  451238 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:59:04.719014  451238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:04.719047  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0805 12:59:06.414858  451238 crio.go:462] duration metric: took 1.70045694s to copy over tarball
	I0805 12:59:06.414950  451238 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:04.513198  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.018197  450884 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:05.911274  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:07.911405  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:04.478626  450393 main.go:141] libmachine: (embed-certs-321139) Waiting to get IP...
	I0805 12:59:04.479615  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.480147  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.480209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.480103  452359 retry.go:31] will retry after 236.369287ms: waiting for machine to come up
	I0805 12:59:04.718716  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:04.719184  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:04.719209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:04.719125  452359 retry.go:31] will retry after 296.553947ms: waiting for machine to come up
	I0805 12:59:05.017667  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.018198  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.018235  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.018143  452359 retry.go:31] will retry after 427.78496ms: waiting for machine to come up
	I0805 12:59:05.447507  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.448075  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.448105  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.448038  452359 retry.go:31] will retry after 469.229133ms: waiting for machine to come up
	I0805 12:59:05.918469  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:05.919013  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:05.919047  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:05.918998  452359 retry.go:31] will retry after 720.005641ms: waiting for machine to come up
	I0805 12:59:06.641103  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:06.641679  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:06.641708  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:06.641634  452359 retry.go:31] will retry after 591.439327ms: waiting for machine to come up
	I0805 12:59:07.234573  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:07.235179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:07.235207  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:07.235063  452359 retry.go:31] will retry after 1.087958168s: waiting for machine to come up
	I0805 12:59:08.324599  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:08.325179  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:08.325212  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:08.325129  452359 retry.go:31] will retry after 1.316276197s: waiting for machine to come up
	I0805 12:59:09.473711  451238 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058718584s)
	I0805 12:59:09.473740  451238 crio.go:469] duration metric: took 3.058854233s to extract the tarball
	I0805 12:59:09.473748  451238 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:09.524420  451238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:09.562003  451238 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0805 12:59:09.562035  451238 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 12:59:09.562107  451238 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.562159  451238 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.562156  451238 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.562194  451238 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.562228  451238 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.562256  451238 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.562374  451238 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0805 12:59:09.562274  451238 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.563981  451238 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.563993  451238 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.564007  451238 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.564015  451238 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.564032  451238 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.564041  451238 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.564076  451238 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:09.564075  451238 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0805 12:59:09.727888  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.732060  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.732150  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.736408  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.748051  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.753579  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.762561  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0805 12:59:09.822623  451238 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0805 12:59:09.822681  451238 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.822742  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.824314  451238 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0805 12:59:09.824360  451238 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.824404  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905619  451238 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0805 12:59:09.905778  451238 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.905738  451238 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0805 12:59:09.905944  451238 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:09.905998  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905851  451238 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0805 12:59:09.906075  451238 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.906133  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.905861  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916767  451238 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0805 12:59:09.916796  451238 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0805 12:59:09.916812  451238 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:09.916830  451238 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0805 12:59:09.916864  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916868  451238 ssh_runner.go:195] Run: which crictl
	I0805 12:59:09.916905  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0805 12:59:09.916958  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0805 12:59:09.918683  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0805 12:59:09.918718  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0805 12:59:09.918776  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0805 12:59:10.007687  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0805 12:59:10.007721  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0805 12:59:10.007871  451238 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0805 12:59:10.042432  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0805 12:59:10.061343  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0805 12:59:10.061400  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0805 12:59:10.061469  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0805 12:59:10.073852  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0805 12:59:10.084957  451238 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0805 12:59:10.423355  451238 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:10.563992  451238 cache_images.go:92] duration metric: took 1.001937985s to LoadCachedImages
	W0805 12:59:10.564184  451238 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19377-383955/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0805 12:59:10.564211  451238 kubeadm.go:934] updating node { 192.168.61.41 8443 v1.20.0 crio true true} ...
	I0805 12:59:10.564345  451238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-635707 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:10.564427  451238 ssh_runner.go:195] Run: crio config
	I0805 12:59:10.612146  451238 cni.go:84] Creating CNI manager for ""
	I0805 12:59:10.612180  451238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:10.612197  451238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:10.612226  451238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-635707 NodeName:old-k8s-version-635707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0805 12:59:10.612415  451238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-635707"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
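	The kubeadm YAML above is rendered by minikube from the kubeadm options logged just before it. Below is a minimal Go sketch of that kind of rendering, assuming a text/template approach with a hypothetical options struct; it is not minikube's actual template code, and only covers the InitConfiguration fragment with the values from this log.

	package main

	import (
		"os"
		"text/template"
	)

	// initOpts is a hypothetical subset of the logged kubeadm options.
	type initOpts struct {
		AdvertiseAddress string
		BindPort         int
		CRISocket        string
		NodeName         string
		NodeIP           string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		// Values taken from the log above.
		_ = t.Execute(os.Stdout, initOpts{
			AdvertiseAddress: "192.168.61.41",
			BindPort:         8443,
			CRISocket:        "/var/run/crio/crio.sock",
			NodeName:         "old-k8s-version-635707",
			NodeIP:           "192.168.61.41",
		})
	}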
	I0805 12:59:10.612507  451238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0805 12:59:10.623036  451238 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:10.623121  451238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:10.633484  451238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0805 12:59:10.652444  451238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:10.673192  451238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0805 12:59:10.694533  451238 ssh_runner.go:195] Run: grep 192.168.61.41	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:10.699901  451238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:10.714251  451238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:10.838992  451238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:10.857248  451238 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707 for IP: 192.168.61.41
	I0805 12:59:10.857279  451238 certs.go:194] generating shared ca certs ...
	I0805 12:59:10.857303  451238 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:10.857515  451238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:10.857587  451238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:10.857602  451238 certs.go:256] generating profile certs ...
	I0805 12:59:10.857746  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/client.key
	I0805 12:59:10.857847  451238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key.3f42c485
	I0805 12:59:10.857907  451238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key
	I0805 12:59:10.858072  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:10.858122  451238 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:10.858143  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:10.858177  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:10.858207  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:10.858235  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:10.858294  451238 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:10.859247  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:10.908518  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:10.949310  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:10.981447  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:11.008085  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 12:59:11.035539  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 12:59:11.071371  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:11.099842  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/old-k8s-version-635707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 12:59:11.135629  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:11.164194  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:11.190595  451238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:11.219765  451238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:11.240836  451238 ssh_runner.go:195] Run: openssl version
	I0805 12:59:11.247516  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:11.260736  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266004  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.266100  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:11.273012  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:11.285453  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:11.296934  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301588  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.301655  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:11.307459  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:11.318833  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:11.330224  451238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334864  451238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.334917  451238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:11.341338  451238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:11.353084  451238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:11.358532  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:11.365419  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:11.371581  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:11.378308  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:11.384640  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:11.390622  451238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
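	The openssl commands above check that each control-plane certificate stays valid for at least another 86400 seconds (24 hours). A minimal Go sketch of the same check using crypto/x509; the certificate path is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}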
	I0805 12:59:11.397027  451238 kubeadm.go:392] StartCluster: {Name:old-k8s-version-635707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-635707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:11.397199  451238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:11.397286  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.436612  451238 cri.go:89] found id: ""
	I0805 12:59:11.436689  451238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:11.447906  451238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:11.447927  451238 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:11.447984  451238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:11.459282  451238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:11.460548  451238 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-635707" does not appear in /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:11.461355  451238 kubeconfig.go:62] /home/jenkins/minikube-integration/19377-383955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-635707" cluster setting kubeconfig missing "old-k8s-version-635707" context setting]
	I0805 12:59:11.462324  451238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:11.476306  451238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:11.487869  451238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.41
	I0805 12:59:11.487911  451238 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:11.487927  451238 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:11.487988  451238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:11.526601  451238 cri.go:89] found id: ""
	I0805 12:59:11.526674  451238 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:11.545429  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:11.556725  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:11.556755  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:11.556820  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:11.566564  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:11.566648  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:11.576859  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:11.586237  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:11.586329  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:11.596721  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.607239  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:11.607340  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:11.617626  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:11.627179  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:11.627251  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:11.637566  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:11.648889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:11.780270  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:08.018320  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:08.018363  450884 pod_ready.go:81] duration metric: took 10.514788401s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:08.018379  450884 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:10.270876  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:10.409419  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:12.410565  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:09.643077  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:09.643655  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:09.643692  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:09.643554  452359 retry.go:31] will retry after 1.473183692s: waiting for machine to come up
	I0805 12:59:11.118468  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:11.119005  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:11.119035  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:11.118943  452359 retry.go:31] will retry after 2.036333626s: waiting for machine to come up
	I0805 12:59:13.156866  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:13.157390  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:13.157419  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:13.157339  452359 retry.go:31] will retry after 2.095065362s: waiting for machine to come up
	I0805 12:59:12.549918  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.781853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.877381  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:12.978141  451238 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:12.978250  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.479242  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:13.978456  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.478575  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:14.978783  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.479342  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:15.978307  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:16.479180  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:12.526543  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.027362  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:14.909480  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:16.911090  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:15.253589  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:15.254081  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:15.254111  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:15.254020  452359 retry.go:31] will retry after 2.859783781s: waiting for machine to come up
	I0805 12:59:18.116972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:18.117528  450393 main.go:141] libmachine: (embed-certs-321139) DBG | unable to find current IP address of domain embed-certs-321139 in network mk-embed-certs-321139
	I0805 12:59:18.117559  450393 main.go:141] libmachine: (embed-certs-321139) DBG | I0805 12:59:18.117486  452359 retry.go:31] will retry after 4.456427854s: waiting for machine to come up
	I0805 12:59:16.978915  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.978574  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.478343  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:18.978820  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.478488  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:19.978335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.478945  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:20.979040  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:21.479324  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:17.525332  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.525407  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.025092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:19.410416  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:21.908646  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:22.576842  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577261  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has current primary IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.577291  450393 main.go:141] libmachine: (embed-certs-321139) Found IP for machine: 192.168.39.196
	I0805 12:59:22.577306  450393 main.go:141] libmachine: (embed-certs-321139) Reserving static IP address...
	I0805 12:59:22.577834  450393 main.go:141] libmachine: (embed-certs-321139) Reserved static IP address: 192.168.39.196
	I0805 12:59:22.577877  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.577893  450393 main.go:141] libmachine: (embed-certs-321139) Waiting for SSH to be available...
	I0805 12:59:22.577915  450393 main.go:141] libmachine: (embed-certs-321139) DBG | skip adding static IP to network mk-embed-certs-321139 - found existing host DHCP lease matching {name: "embed-certs-321139", mac: "52:54:00:6c:ad:fd", ip: "192.168.39.196"}
	I0805 12:59:22.577922  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Getting to WaitForSSH function...
	I0805 12:59:22.580080  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580520  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.580552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.580707  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH client type: external
	I0805 12:59:22.580742  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa (-rw-------)
	I0805 12:59:22.580764  450393 main.go:141] libmachine: (embed-certs-321139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 12:59:22.580778  450393 main.go:141] libmachine: (embed-certs-321139) DBG | About to run SSH command:
	I0805 12:59:22.580793  450393 main.go:141] libmachine: (embed-certs-321139) DBG | exit 0
	I0805 12:59:22.703872  450393 main.go:141] libmachine: (embed-certs-321139) DBG | SSH cmd err, output: <nil>: 
	I0805 12:59:22.704333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetConfigRaw
	I0805 12:59:22.705046  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:22.707544  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.707919  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.707951  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.708240  450393 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/config.json ...
	I0805 12:59:22.708474  450393 machine.go:94] provisionDockerMachine start ...
	I0805 12:59:22.708501  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:22.708755  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.711177  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711488  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.711510  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.711639  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.711842  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.711998  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.712157  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.712378  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.712581  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.712595  450393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 12:59:22.816371  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 12:59:22.816433  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816708  450393 buildroot.go:166] provisioning hostname "embed-certs-321139"
	I0805 12:59:22.816743  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:22.816959  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.819715  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820085  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.820108  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.820321  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.820510  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820656  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.820794  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.820952  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.821203  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.821229  450393 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-321139 && echo "embed-certs-321139" | sudo tee /etc/hostname
	I0805 12:59:22.938845  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-321139
	
	I0805 12:59:22.938888  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:22.942264  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942651  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:22.942684  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:22.942904  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:22.943161  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943383  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:22.943568  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:22.943777  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:22.943987  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:22.944011  450393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-321139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-321139/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-321139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 12:59:23.062700  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 12:59:23.062734  450393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19377-383955/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-383955/.minikube}
	I0805 12:59:23.062762  450393 buildroot.go:174] setting up certificates
	I0805 12:59:23.062774  450393 provision.go:84] configureAuth start
	I0805 12:59:23.062800  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetMachineName
	I0805 12:59:23.063142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.065839  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066140  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.066175  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.066359  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.069214  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069562  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.069597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.069746  450393 provision.go:143] copyHostCerts
	I0805 12:59:23.069813  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem, removing ...
	I0805 12:59:23.069827  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem
	I0805 12:59:23.069897  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/ca.pem (1082 bytes)
	I0805 12:59:23.070014  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem, removing ...
	I0805 12:59:23.070025  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem
	I0805 12:59:23.070083  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/cert.pem (1123 bytes)
	I0805 12:59:23.070185  450393 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem, removing ...
	I0805 12:59:23.070197  450393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem
	I0805 12:59:23.070226  450393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-383955/.minikube/key.pem (1675 bytes)
	I0805 12:59:23.070308  450393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem org=jenkins.embed-certs-321139 san=[127.0.0.1 192.168.39.196 embed-certs-321139 localhost minikube]
	I0805 12:59:23.223660  450393 provision.go:177] copyRemoteCerts
	I0805 12:59:23.223759  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 12:59:23.223799  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.226548  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.226980  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.227014  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.227195  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.227449  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.227624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.227801  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.311952  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 12:59:23.336888  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 12:59:23.363397  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 12:59:23.388197  450393 provision.go:87] duration metric: took 325.408192ms to configureAuth
	I0805 12:59:23.388234  450393 buildroot.go:189] setting minikube options for container-runtime
	I0805 12:59:23.388470  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:23.388596  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.391247  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391597  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.391626  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.391843  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.392054  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392240  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.392371  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.392528  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.392825  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.392853  450393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 12:59:23.675427  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 12:59:23.675459  450393 machine.go:97] duration metric: took 966.969142ms to provisionDockerMachine
	I0805 12:59:23.675472  450393 start.go:293] postStartSetup for "embed-certs-321139" (driver="kvm2")
	I0805 12:59:23.675484  450393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 12:59:23.675515  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.675885  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 12:59:23.675912  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.678780  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679100  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.679152  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.679333  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.679524  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.679657  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.679860  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.764372  450393 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 12:59:23.769059  450393 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 12:59:23.769088  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/addons for local assets ...
	I0805 12:59:23.769162  450393 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-383955/.minikube/files for local assets ...
	I0805 12:59:23.769231  450393 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem -> 3912192.pem in /etc/ssl/certs
	I0805 12:59:23.769334  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 12:59:23.781287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:23.808609  450393 start.go:296] duration metric: took 133.117086ms for postStartSetup
	I0805 12:59:23.808665  450393 fix.go:56] duration metric: took 20.659690035s for fixHost
	I0805 12:59:23.808694  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.811519  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.811948  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.811978  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.812164  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.812366  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812539  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.812708  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.812897  450393 main.go:141] libmachine: Using SSH client type: native
	I0805 12:59:23.813137  450393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0805 12:59:23.813151  450393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 12:59:23.916498  450393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722862763.883942670
	
	I0805 12:59:23.916521  450393 fix.go:216] guest clock: 1722862763.883942670
	I0805 12:59:23.916536  450393 fix.go:229] Guest: 2024-08-05 12:59:23.88394267 +0000 UTC Remote: 2024-08-05 12:59:23.8086712 +0000 UTC m=+359.764794687 (delta=75.27147ms)
	I0805 12:59:23.916570  450393 fix.go:200] guest clock delta is within tolerance: 75.27147ms
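	The fix.go lines above read the guest clock via "date +%s.%N", compare it with the host clock, and accept the skew because the delta (about 75ms) is small. A minimal sketch of that comparison; the tolerance value here is an assumption for illustration:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestDelta parses "date +%s.%N" output and returns guest minus local time.
	func guestDelta(dateOutput string, local time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(local), nil
	}

	func main() {
		// Guest and host timestamps taken from the log above.
		delta, err := guestDelta("1722862763.883942670", time.Unix(1722862763, 808671200))
		if err != nil {
			panic(err)
		}
		tolerance := 2 * time.Second // assumed tolerance, for illustration only
		fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(float64(delta)) < float64(tolerance))
	}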
	I0805 12:59:23.916578  450393 start.go:83] releasing machines lock for "embed-certs-321139", held for 20.767637373s
	I0805 12:59:23.916598  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.916867  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:23.919570  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.919972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.919999  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.920142  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920666  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920837  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:23.920930  450393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 12:59:23.920981  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.921063  450393 ssh_runner.go:195] Run: cat /version.json
	I0805 12:59:23.921083  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:23.924176  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924209  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924557  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924588  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924613  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:23.924635  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:23.924749  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.924936  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:23.925021  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925127  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:23.925219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925286  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:23.925369  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:23.925454  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:24.000693  450393 ssh_runner.go:195] Run: systemctl --version
	I0805 12:59:24.023194  450393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 12:59:24.178807  450393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 12:59:24.184954  450393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 12:59:24.185031  450393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 12:59:24.201420  450393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 12:59:24.201453  450393 start.go:495] detecting cgroup driver to use...
	I0805 12:59:24.201543  450393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 12:59:24.218603  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 12:59:24.233928  450393 docker.go:217] disabling cri-docker service (if available) ...
	I0805 12:59:24.233999  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 12:59:24.248455  450393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 12:59:24.263355  450393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 12:59:24.386806  450393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 12:59:24.565128  450393 docker.go:233] disabling docker service ...
	I0805 12:59:24.565229  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 12:59:24.581053  450393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 12:59:24.594297  450393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 12:59:24.716615  450393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 12:59:24.835687  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 12:59:24.850666  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 12:59:24.870993  450393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 12:59:24.871055  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.881731  450393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 12:59:24.881815  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.893156  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.903802  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.915189  450393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 12:59:24.926967  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.938008  450393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.956033  450393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 12:59:24.967863  450393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 12:59:24.977758  450393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 12:59:24.977822  450393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 12:59:24.993837  450393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 12:59:25.005009  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:25.135856  450393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 12:59:25.277425  450393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 12:59:25.277513  450393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 12:59:25.282628  450393 start.go:563] Will wait 60s for crictl version
	I0805 12:59:25.282704  450393 ssh_runner.go:195] Run: which crictl
	I0805 12:59:25.287324  450393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 12:59:25.335315  450393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 12:59:25.335396  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.367574  450393 ssh_runner.go:195] Run: crio --version
	I0805 12:59:25.398926  450393 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 12:59:21.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.478367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:22.978424  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.478877  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:23.978841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.478635  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.978824  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.479076  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:25.979222  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:26.478928  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:24.025234  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:26.028817  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:23.909428  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.910877  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:27.911235  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:25.400219  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetIP
	I0805 12:59:25.403052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403508  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:25.403552  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:25.403849  450393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 12:59:25.408402  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:25.423146  450393 kubeadm.go:883] updating cluster {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 12:59:25.423301  450393 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 12:59:25.423368  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:25.460713  450393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 12:59:25.460795  450393 ssh_runner.go:195] Run: which lz4
	I0805 12:59:25.464997  450393 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 12:59:25.469397  450393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 12:59:25.469452  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 12:59:26.966110  450393 crio.go:462] duration metric: took 1.501152522s to copy over tarball
	I0805 12:59:26.966207  450393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 12:59:26.978648  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.478951  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:27.978405  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.479008  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.978521  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.479199  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:29.979288  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.479030  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:30.978372  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:31.479194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:28.525888  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:31.025690  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:30.410973  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:32.910889  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:29.287605  450393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321364872s)
	I0805 12:59:29.287636  450393 crio.go:469] duration metric: took 2.321487153s to extract the tarball
	I0805 12:59:29.287647  450393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 12:59:29.329182  450393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 12:59:29.372183  450393 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 12:59:29.372211  450393 cache_images.go:84] Images are preloaded, skipping loading
	I0805 12:59:29.372220  450393 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0805 12:59:29.372349  450393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-321139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 12:59:29.372433  450393 ssh_runner.go:195] Run: crio config
	I0805 12:59:29.426003  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:29.426025  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:29.426036  450393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 12:59:29.426059  450393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-321139 NodeName:embed-certs-321139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 12:59:29.426192  450393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-321139"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 12:59:29.426250  450393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 12:59:29.436248  450393 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 12:59:29.436315  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 12:59:29.445844  450393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0805 12:59:29.463125  450393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 12:59:29.479685  450393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0805 12:59:29.499033  450393 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0805 12:59:29.503175  450393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 12:59:29.516141  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:29.645914  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:29.664578  450393 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139 for IP: 192.168.39.196
	I0805 12:59:29.664608  450393 certs.go:194] generating shared ca certs ...
	I0805 12:59:29.664626  450393 certs.go:226] acquiring lock for ca certs: {Name:mk0abfcaff3883fbb5243c47b487f9200d9166d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:29.664853  450393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key
	I0805 12:59:29.664922  450393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key
	I0805 12:59:29.664939  450393 certs.go:256] generating profile certs ...
	I0805 12:59:29.665058  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/client.key
	I0805 12:59:29.665143  450393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key.ce53eda3
	I0805 12:59:29.665183  450393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key
	I0805 12:59:29.665293  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem (1338 bytes)
	W0805 12:59:29.665324  450393 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219_empty.pem, impossibly tiny 0 bytes
	I0805 12:59:29.665331  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 12:59:29.665360  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/ca.pem (1082 bytes)
	I0805 12:59:29.665382  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/cert.pem (1123 bytes)
	I0805 12:59:29.665405  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/certs/key.pem (1675 bytes)
	I0805 12:59:29.665442  450393 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem (1708 bytes)
	I0805 12:59:29.666287  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 12:59:29.705969  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 12:59:29.752700  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 12:59:29.779819  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 12:59:29.806578  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0805 12:59:29.832277  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 12:59:29.861682  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 12:59:29.888113  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/embed-certs-321139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 12:59:29.915023  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/certs/391219.pem --> /usr/share/ca-certificates/391219.pem (1338 bytes)
	I0805 12:59:29.942582  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/ssl/certs/3912192.pem --> /usr/share/ca-certificates/3912192.pem (1708 bytes)
	I0805 12:59:29.971225  450393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-383955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 12:59:29.999278  450393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 12:59:30.018294  450393 ssh_runner.go:195] Run: openssl version
	I0805 12:59:30.024645  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 12:59:30.035446  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040216  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.040279  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 12:59:30.046151  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 12:59:30.057664  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391219.pem && ln -fs /usr/share/ca-certificates/391219.pem /etc/ssl/certs/391219.pem"
	I0805 12:59:30.068822  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074073  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:39 /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.074138  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391219.pem
	I0805 12:59:30.080126  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391219.pem /etc/ssl/certs/51391683.0"
	I0805 12:59:30.091168  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3912192.pem && ln -fs /usr/share/ca-certificates/3912192.pem /etc/ssl/certs/3912192.pem"
	I0805 12:59:30.103171  450393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108840  450393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:39 /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.108924  450393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3912192.pem
	I0805 12:59:30.115469  450393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3912192.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 12:59:30.126742  450393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 12:59:30.132008  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 12:59:30.138285  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 12:59:30.144251  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 12:59:30.150718  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 12:59:30.157183  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 12:59:30.163709  450393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 12:59:30.170852  450393 kubeadm.go:392] StartCluster: {Name:embed-certs-321139 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-321139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:59:30.170987  450393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 12:59:30.171055  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.216014  450393 cri.go:89] found id: ""
	I0805 12:59:30.216103  450393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 12:59:30.234046  450393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 12:59:30.234076  450393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 12:59:30.234151  450393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 12:59:30.245861  450393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:59:30.247434  450393 kubeconfig.go:125] found "embed-certs-321139" server: "https://192.168.39.196:8443"
	I0805 12:59:30.250024  450393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 12:59:30.261066  450393 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.196
	I0805 12:59:30.261116  450393 kubeadm.go:1160] stopping kube-system containers ...
	I0805 12:59:30.261140  450393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 12:59:30.261201  450393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 12:59:30.306587  450393 cri.go:89] found id: ""
	I0805 12:59:30.306678  450393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 12:59:30.326818  450393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 12:59:30.336908  450393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 12:59:30.336931  450393 kubeadm.go:157] found existing configuration files:
	
	I0805 12:59:30.336984  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 12:59:30.346004  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 12:59:30.346105  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 12:59:30.355979  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 12:59:30.366124  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 12:59:30.366185  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 12:59:30.376923  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.386526  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 12:59:30.386599  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 12:59:30.396661  450393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 12:59:30.406693  450393 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 12:59:30.406765  450393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 12:59:30.417789  450393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 12:59:30.428214  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:30.554777  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.703579  450393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.14876196s)
	I0805 12:59:31.703620  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.925724  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:31.999840  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:32.089948  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 12:59:32.090084  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.590152  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.090222  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.115351  450393 api_server.go:72] duration metric: took 1.025404322s to wait for apiserver process to appear ...
	I0805 12:59:33.115385  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 12:59:33.115411  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:33.115983  450393 api_server.go:269] stopped: https://192.168.39.196:8443/healthz: Get "https://192.168.39.196:8443/healthz": dial tcp 192.168.39.196:8443: connect: connection refused
	I0805 12:59:33.616210  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:31.978481  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.479031  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:32.978796  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.478677  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.979377  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.478595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:34.979227  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.478695  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:35.978911  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:36.479327  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:33.027363  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:35.525528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:36.274855  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.274895  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.274912  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.314290  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 12:59:36.314325  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 12:59:36.615566  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:36.620594  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:36.620626  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.116251  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.120719  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 12:59:37.120749  450393 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 12:59:37.616330  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 12:59:37.620778  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 12:59:37.627608  450393 api_server.go:141] control plane version: v1.30.3
	I0805 12:59:37.627640  450393 api_server.go:131] duration metric: took 4.512246076s to wait for apiserver health ...
	I0805 12:59:37.627652  450393 cni.go:84] Creating CNI manager for ""
	I0805 12:59:37.627661  450393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 12:59:37.628987  450393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
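The api_server.go lines above poll https://192.168.39.196:8443/healthz roughly every half second, treating "connection refused", 403 (anonymous user) and 500 (post-start hooks still failing) as "not ready yet" until the endpoint returns 200. Below is a minimal stdlib Go sketch of that polling pattern, assuming a self-signed apiserver certificate during bootstrap; the endpoint and cadence come from the log, the rest is illustrative and not minikube's actual api_server.go.

// healthzwait.go: illustrative readiness polling, not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	// Skip TLS verification for this check only; the bootstrap apiserver
	// presents a cert the host does not yet trust.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the log's "healthz returned 200: ok"
			}
			// 403 and 500 both mean "keep waiting", as in the log above.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.196:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}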
	I0805 12:59:35.410070  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.411719  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:37.630068  450393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 12:59:37.650034  450393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 12:59:37.691891  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 12:59:37.704810  450393 system_pods.go:59] 8 kube-system pods found
	I0805 12:59:37.704855  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 12:59:37.704866  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 12:59:37.704887  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 12:59:37.704900  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 12:59:37.704916  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 12:59:37.704928  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 12:59:37.704946  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 12:59:37.704956  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 12:59:37.704967  450393 system_pods.go:74] duration metric: took 13.04358ms to wait for pod list to return data ...
	I0805 12:59:37.704980  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 12:59:37.710340  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 12:59:37.710367  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 12:59:37.710382  450393 node_conditions.go:105] duration metric: took 5.392102ms to run NodePressure ...
	I0805 12:59:37.710402  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 12:59:37.995945  450393 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000274  450393 kubeadm.go:739] kubelet initialised
	I0805 12:59:38.000295  450393 kubeadm.go:740] duration metric: took 4.323835ms waiting for restarted kubelet to initialise ...
	I0805 12:59:38.000302  450393 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:38.006122  450393 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.012368  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012392  450393 pod_ready.go:81] duration metric: took 6.243837ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.012400  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.012406  450393 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.016338  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016357  450393 pod_ready.go:81] duration metric: took 3.943012ms for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.016364  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "etcd-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.016369  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.021019  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021044  450393 pod_ready.go:81] duration metric: took 4.667242ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.021055  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.021063  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.096303  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096334  450393 pod_ready.go:81] duration metric: took 75.253785ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.096345  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.096351  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.495648  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495677  450393 pod_ready.go:81] duration metric: took 399.318117ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.495687  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-proxy-shgv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.495694  450393 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:38.896066  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896091  450393 pod_ready.go:81] duration metric: took 400.39101ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:38.896101  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:38.896108  450393 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:39.295587  450393 pod_ready.go:97] node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295618  450393 pod_ready.go:81] duration metric: took 399.499354ms for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 12:59:39.295632  450393 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-321139" hosting pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:39.295653  450393 pod_ready.go:38] duration metric: took 1.295340252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
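The pod_ready.go lines above wait up to 4m0s per system-critical pod and report "skipping" while the node itself is not yet Ready. The following is a short client-go sketch of checking a pod's Ready condition; the namespace, pod name, kubeconfig path and 4-minute budget appear in the log, while the helper names and overall structure are assumptions rather than minikube's pod_ready implementation.

// podready.go: illustrative Ready-condition check, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19377-383955/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same per-pod budget as the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-wm7lh", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}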
	I0805 12:59:39.295675  450393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 12:59:39.308136  450393 ops.go:34] apiserver oom_adj: -16
	I0805 12:59:39.308161  450393 kubeadm.go:597] duration metric: took 9.07407738s to restartPrimaryControlPlane
	I0805 12:59:39.308170  450393 kubeadm.go:394] duration metric: took 9.137335392s to StartCluster
	I0805 12:59:39.308188  450393 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.308272  450393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:59:39.310750  450393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 12:59:39.311015  450393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 12:59:39.311149  450393 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 12:59:39.311240  450393 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-321139"
	I0805 12:59:39.311289  450393 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-321139"
	W0805 12:59:39.311303  450393 addons.go:243] addon storage-provisioner should already be in state true
	I0805 12:59:39.311301  450393 addons.go:69] Setting metrics-server=true in profile "embed-certs-321139"
	I0805 12:59:39.311305  450393 addons.go:69] Setting default-storageclass=true in profile "embed-certs-321139"
	I0805 12:59:39.311351  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311360  450393 addons.go:234] Setting addon metrics-server=true in "embed-certs-321139"
	W0805 12:59:39.311371  450393 addons.go:243] addon metrics-server should already be in state true
	I0805 12:59:39.311371  450393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-321139"
	I0805 12:59:39.311454  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.311287  450393 config.go:182] Loaded profile config "embed-certs-321139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:59:39.311848  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311897  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311906  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.311912  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.311964  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.312115  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.313050  450393 out.go:177] * Verifying Kubernetes components...
	I0805 12:59:39.314390  450393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 12:59:39.327427  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0805 12:59:39.327687  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0805 12:59:39.328016  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328155  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.328609  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328649  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.328735  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.328786  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.329013  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329086  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.329560  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329599  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.329676  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.329721  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.330884  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0805 12:59:39.331381  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.331878  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.331902  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.332289  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.332529  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.336244  450393 addons.go:234] Setting addon default-storageclass=true in "embed-certs-321139"
	W0805 12:59:39.336269  450393 addons.go:243] addon default-storageclass should already be in state true
	I0805 12:59:39.336305  450393 host.go:66] Checking if "embed-certs-321139" exists ...
	I0805 12:59:39.336688  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.336735  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.347255  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0805 12:59:39.347411  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0805 12:59:39.347776  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.347910  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.348271  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348291  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348464  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.348476  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.348603  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348760  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.348817  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.348955  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.350697  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.350906  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.352896  450393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 12:59:39.352895  450393 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 12:59:39.354185  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 12:59:39.354207  450393 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 12:59:39.354224  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.354266  450393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.354277  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 12:59:39.354292  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.356641  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0805 12:59:39.357213  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.357546  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.357791  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.357814  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.357867  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.358001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.358020  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359294  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359322  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.359337  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.359345  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.359353  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.359488  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359624  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.359669  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.359783  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.359977  450393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:59:39.360009  450393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:59:39.360077  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.360210  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.380935  450393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0805 12:59:39.381394  450393 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:59:39.381987  450393 main.go:141] libmachine: Using API Version  1
	I0805 12:59:39.382029  450393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:59:39.382362  450393 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:59:39.382603  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetState
	I0805 12:59:39.384225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .DriverName
	I0805 12:59:39.384497  450393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.384515  450393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 12:59:39.384536  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHHostname
	I0805 12:59:39.389471  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.389972  450393 main.go:141] libmachine: (embed-certs-321139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:ad:fd", ip: ""} in network mk-embed-certs-321139: {Iface:virbr1 ExpiryTime:2024-08-05 13:49:57 +0000 UTC Type:0 Mac:52:54:00:6c:ad:fd Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:embed-certs-321139 Clientid:01:52:54:00:6c:ad:fd}
	I0805 12:59:39.390001  450393 main.go:141] libmachine: (embed-certs-321139) DBG | domain embed-certs-321139 has defined IP address 192.168.39.196 and MAC address 52:54:00:6c:ad:fd in network mk-embed-certs-321139
	I0805 12:59:39.390124  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHPort
	I0805 12:59:39.390303  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHKeyPath
	I0805 12:59:39.390604  450393 main.go:141] libmachine: (embed-certs-321139) Calling .GetSSHUsername
	I0805 12:59:39.390791  450393 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/embed-certs-321139/id_rsa Username:docker}
	I0805 12:59:39.513696  450393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 12:59:39.533291  450393 node_ready.go:35] waiting up to 6m0s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:39.597816  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 12:59:39.700234  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 12:59:39.719936  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 12:59:39.719958  450393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 12:59:39.760405  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 12:59:39.760441  450393 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 12:59:39.808765  450393 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.808794  450393 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 12:59:39.833073  450393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 12:59:39.946594  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.946633  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.946968  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.946995  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947052  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.947121  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.947137  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.947456  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.947477  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:39.947490  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:39.953919  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:39.953942  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:39.954189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:39.954209  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636249  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636274  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636638  450393 main.go:141] libmachine: (embed-certs-321139) DBG | Closing plugin on server side
	I0805 12:59:40.636715  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.636729  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.636745  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.636757  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.636989  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.637008  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.671789  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.671819  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672189  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672207  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672217  450393 main.go:141] libmachine: Making call to close driver server
	I0805 12:59:40.672225  450393 main.go:141] libmachine: (embed-certs-321139) Calling .Close
	I0805 12:59:40.672468  450393 main.go:141] libmachine: Successfully made call to close driver server
	I0805 12:59:40.672485  450393 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 12:59:40.672499  450393 addons.go:475] Verifying addon metrics-server=true in "embed-certs-321139"
	I0805 12:59:40.674497  450393 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 12:59:36.978361  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.478380  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:37.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.478283  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.979257  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.478407  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:39.978772  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.478395  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:40.979309  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:41.478302  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:38.026001  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.026706  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:39.909336  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:41.910240  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:40.675778  450393 addons.go:510] duration metric: took 1.364642066s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 12:59:41.537321  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:44.037571  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:41.978791  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.478841  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.979289  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.478344  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:43.978613  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.478756  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:44.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.478363  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:45.978354  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:46.478417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:42.524568  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:45.024950  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:47.025453  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:44.408846  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.410085  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:46.537183  450393 node_ready.go:53] node "embed-certs-321139" has status "Ready":"False"
	I0805 12:59:47.037178  450393 node_ready.go:49] node "embed-certs-321139" has status "Ready":"True"
	I0805 12:59:47.037206  450393 node_ready.go:38] duration metric: took 7.503884334s for node "embed-certs-321139" to be "Ready" ...
	I0805 12:59:47.037221  450393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 12:59:47.043159  450393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048037  450393 pod_ready.go:92] pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:47.048088  450393 pod_ready.go:81] duration metric: took 4.901694ms for pod "coredns-7db6d8ff4d-wm7lh" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:47.048102  450393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055429  450393 pod_ready.go:92] pod "etcd-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.055454  450393 pod_ready.go:81] duration metric: took 2.007345086s for pod "etcd-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.055464  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060072  450393 pod_ready.go:92] pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.060095  450393 pod_ready.go:81] duration metric: took 4.624968ms for pod "kube-apiserver-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.060103  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065663  450393 pod_ready.go:92] pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.065689  450393 pod_ready.go:81] duration metric: took 5.578205ms for pod "kube-controller-manager-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.065708  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071143  450393 pod_ready.go:92] pod "kube-proxy-shgv2" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.071166  450393 pod_ready.go:81] duration metric: took 5.450104ms for pod "kube-proxy-shgv2" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.071174  450393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:46.978356  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.478322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:47.978417  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.478966  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:48.979317  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.478449  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.978364  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.479294  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:50.978435  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:51.478614  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:49.028075  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.524299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:48.908177  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:50.908490  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:52.909257  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:49.438002  450393 pod_ready.go:92] pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace has status "Ready":"True"
	I0805 12:59:49.438032  450393 pod_ready.go:81] duration metric: took 366.851004ms for pod "kube-scheduler-embed-certs-321139" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:49.438042  450393 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	I0805 12:59:51.443490  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:53.444534  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:51.978526  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.479187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:52.979090  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.478733  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.978571  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.478525  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:54.979125  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.478711  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:55.979266  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:56.478956  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:53.525369  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.526660  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:54.909757  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.409489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:55.445189  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:57.944983  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:56.979226  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.479019  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.978634  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.478338  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:58.978987  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.479290  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:59.978383  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.478373  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:00.978412  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:01.479312  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:59:57.527240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.024177  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.024749  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 12:59:59.908362  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.909101  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:00.445471  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:02.944535  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:01.978392  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.479119  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:02.978313  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.478401  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:03.979029  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.478963  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.978393  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.478418  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:05.978381  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:06.479229  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:04.028522  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.525385  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:04.409119  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.409863  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:05.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:07.452452  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:06.979172  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.479251  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:07.979183  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.478722  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:08.979248  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.478527  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.978581  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.478499  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:10.978520  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:11.478843  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:09.025651  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.525086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:08.909528  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.408408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:13.410472  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:09.945614  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:12.443723  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:11.978536  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.478504  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:12.979179  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:12.979258  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:13.022653  451238 cri.go:89] found id: ""
	I0805 13:00:13.022680  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.022689  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:13.022696  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:13.022766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:13.059292  451238 cri.go:89] found id: ""
	I0805 13:00:13.059326  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.059336  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:13.059343  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:13.059399  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:13.098750  451238 cri.go:89] found id: ""
	I0805 13:00:13.098782  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.098793  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:13.098802  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:13.098866  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:13.133307  451238 cri.go:89] found id: ""
	I0805 13:00:13.133338  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.133346  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:13.133353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:13.133420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:13.171124  451238 cri.go:89] found id: ""
	I0805 13:00:13.171160  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.171170  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:13.171177  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:13.171237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:13.209200  451238 cri.go:89] found id: ""
	I0805 13:00:13.209235  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.209247  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:13.209254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:13.209312  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:13.244261  451238 cri.go:89] found id: ""
	I0805 13:00:13.244302  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.244313  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:13.244324  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:13.244397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:13.283295  451238 cri.go:89] found id: ""
	I0805 13:00:13.283331  451238 logs.go:276] 0 containers: []
	W0805 13:00:13.283342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:13.283356  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:13.283372  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:13.344134  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:13.344174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:13.384084  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:13.384119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:13.433784  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:13.433821  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:13.449756  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:13.449786  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:13.573090  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:16.074053  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:16.087817  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:16.087900  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:16.130938  451238 cri.go:89] found id: ""
	I0805 13:00:16.130970  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.130981  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:16.130989  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:16.131058  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:16.184208  451238 cri.go:89] found id: ""
	I0805 13:00:16.184245  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.184259  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:16.184269  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:16.184346  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:16.230959  451238 cri.go:89] found id: ""
	I0805 13:00:16.230998  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.231011  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:16.231020  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:16.231100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:16.282886  451238 cri.go:89] found id: ""
	I0805 13:00:16.282940  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.282954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:16.282963  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:16.283024  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:16.320345  451238 cri.go:89] found id: ""
	I0805 13:00:16.320381  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.320397  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:16.320404  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:16.320521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:16.356390  451238 cri.go:89] found id: ""
	I0805 13:00:16.356427  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.356439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:16.356447  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:16.356503  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:16.400477  451238 cri.go:89] found id: ""
	I0805 13:00:16.400510  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.400529  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:16.400539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:16.400612  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:16.440634  451238 cri.go:89] found id: ""
	I0805 13:00:16.440662  451238 logs.go:276] 0 containers: []
	W0805 13:00:16.440673  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:16.440685  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:16.440702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:16.510879  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:16.510922  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:16.554294  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:16.554332  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:16.607798  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:16.607853  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:16.622618  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:16.622655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:16.702599  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:14.025025  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.025182  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:15.909245  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.409729  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:14.445222  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:16.445451  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:18.944533  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:19.202789  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:19.215776  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:19.215851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:19.250503  451238 cri.go:89] found id: ""
	I0805 13:00:19.250540  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.250551  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:19.250558  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:19.250630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:19.287358  451238 cri.go:89] found id: ""
	I0805 13:00:19.287392  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.287403  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:19.287412  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:19.287484  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:19.322167  451238 cri.go:89] found id: ""
	I0805 13:00:19.322195  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.322203  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:19.322209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:19.322262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:19.356874  451238 cri.go:89] found id: ""
	I0805 13:00:19.356905  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.356923  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:19.356931  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:19.357006  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:19.395172  451238 cri.go:89] found id: ""
	I0805 13:00:19.395206  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.395217  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:19.395227  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:19.395294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:19.438404  451238 cri.go:89] found id: ""
	I0805 13:00:19.438431  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.438439  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:19.438445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:19.438510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:19.474727  451238 cri.go:89] found id: ""
	I0805 13:00:19.474755  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.474762  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:19.474769  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:19.474832  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:19.513906  451238 cri.go:89] found id: ""
	I0805 13:00:19.513945  451238 logs.go:276] 0 containers: []
	W0805 13:00:19.513953  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:19.513963  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:19.513977  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:19.528337  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:19.528378  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:19.601135  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:19.601168  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:19.601185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:19.676792  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:19.676844  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:19.716861  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:19.716894  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:18.025634  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.027525  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.909150  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.910153  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:20.945009  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:23.444529  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:22.266971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:22.280346  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:22.280422  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:22.314788  451238 cri.go:89] found id: ""
	I0805 13:00:22.314816  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.314824  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:22.314831  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:22.314884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:22.357357  451238 cri.go:89] found id: ""
	I0805 13:00:22.357394  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.357405  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:22.357414  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:22.357483  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:22.393254  451238 cri.go:89] found id: ""
	I0805 13:00:22.393288  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.393296  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:22.393302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:22.393366  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:22.434766  451238 cri.go:89] found id: ""
	I0805 13:00:22.434796  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.434807  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:22.434815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:22.434887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:22.475649  451238 cri.go:89] found id: ""
	I0805 13:00:22.475676  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.475684  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:22.475690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:22.475754  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:22.515633  451238 cri.go:89] found id: ""
	I0805 13:00:22.515662  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.515670  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:22.515677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:22.515757  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:22.550716  451238 cri.go:89] found id: ""
	I0805 13:00:22.550749  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.550759  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:22.550767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:22.550849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:22.588537  451238 cri.go:89] found id: ""
	I0805 13:00:22.588571  451238 logs.go:276] 0 containers: []
	W0805 13:00:22.588583  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:22.588595  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:22.588609  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.638535  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:22.638577  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:22.654879  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:22.654919  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:22.721482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:22.721513  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:22.721529  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:22.801442  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:22.801489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.343805  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:25.358068  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:25.358176  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:25.393734  451238 cri.go:89] found id: ""
	I0805 13:00:25.393767  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.393778  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:25.393785  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:25.393849  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:25.428217  451238 cri.go:89] found id: ""
	I0805 13:00:25.428244  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.428252  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:25.428257  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:25.428316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:25.462826  451238 cri.go:89] found id: ""
	I0805 13:00:25.462858  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.462869  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:25.462877  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:25.462961  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:25.502960  451238 cri.go:89] found id: ""
	I0805 13:00:25.502989  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.502998  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:25.503006  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:25.503072  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:25.538859  451238 cri.go:89] found id: ""
	I0805 13:00:25.538888  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.538897  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:25.538902  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:25.538964  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:25.577850  451238 cri.go:89] found id: ""
	I0805 13:00:25.577883  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.577894  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:25.577901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:25.577988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:25.611728  451238 cri.go:89] found id: ""
	I0805 13:00:25.611773  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.611785  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:25.611793  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:25.611865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:25.654987  451238 cri.go:89] found id: ""
	I0805 13:00:25.655018  451238 logs.go:276] 0 containers: []
	W0805 13:00:25.655027  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:25.655039  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:25.655052  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:25.669124  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:25.669160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:25.747354  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:25.747380  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:25.747398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:25.825198  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:25.825241  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:25.865511  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:25.865546  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:22.526638  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.024414  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.025393  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.409361  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.411148  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:25.444607  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:27.447460  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:28.418263  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:28.431831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:28.431895  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:28.470249  451238 cri.go:89] found id: ""
	I0805 13:00:28.470280  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.470291  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:28.470301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:28.470373  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:28.506935  451238 cri.go:89] found id: ""
	I0805 13:00:28.506968  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.506977  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:28.506985  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:28.507053  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:28.546621  451238 cri.go:89] found id: ""
	I0805 13:00:28.546652  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.546663  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:28.546671  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:28.546749  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:28.584699  451238 cri.go:89] found id: ""
	I0805 13:00:28.584734  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.584745  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:28.584753  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:28.584820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:28.620693  451238 cri.go:89] found id: ""
	I0805 13:00:28.620726  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.620736  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:28.620744  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:28.620814  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:28.657340  451238 cri.go:89] found id: ""
	I0805 13:00:28.657370  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.657379  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:28.657385  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:28.657438  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:28.695126  451238 cri.go:89] found id: ""
	I0805 13:00:28.695156  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.695166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:28.695174  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:28.695239  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:28.729757  451238 cri.go:89] found id: ""
	I0805 13:00:28.729808  451238 logs.go:276] 0 containers: []
	W0805 13:00:28.729821  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:28.729834  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:28.729852  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:28.769642  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:28.769675  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:28.818076  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:28.818114  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:28.831466  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:28.831496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:28.902788  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:28.902818  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:28.902836  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.482482  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:31.497767  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:31.497867  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:31.536922  451238 cri.go:89] found id: ""
	I0805 13:00:31.536948  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.536960  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:31.536969  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:31.537040  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:31.572422  451238 cri.go:89] found id: ""
	I0805 13:00:31.572456  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.572466  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:31.572472  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:31.572531  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:31.607961  451238 cri.go:89] found id: ""
	I0805 13:00:31.607996  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.608008  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:31.608016  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:31.608082  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:31.641771  451238 cri.go:89] found id: ""
	I0805 13:00:31.641800  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.641822  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:31.641830  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:31.641904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:31.681661  451238 cri.go:89] found id: ""
	I0805 13:00:31.681695  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.681707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:31.681715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:31.681791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:31.723777  451238 cri.go:89] found id: ""
	I0805 13:00:31.723814  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.723823  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:31.723829  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:31.723922  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:31.759898  451238 cri.go:89] found id: ""
	I0805 13:00:31.759935  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.759948  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:31.759957  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:31.760022  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:31.798433  451238 cri.go:89] found id: ""
	I0805 13:00:31.798462  451238 logs.go:276] 0 containers: []
	W0805 13:00:31.798470  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:31.798480  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:31.798497  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:31.872005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:31.872030  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:31.872045  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:31.952201  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:31.952240  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:29.524445  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.525646  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.909901  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:32.408826  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:29.944170  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.944427  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:31.995920  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:31.995955  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:32.047453  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:32.047493  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.562369  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:34.576644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:34.576708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:34.613002  451238 cri.go:89] found id: ""
	I0805 13:00:34.613036  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.613047  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:34.613056  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:34.613127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:34.650723  451238 cri.go:89] found id: ""
	I0805 13:00:34.650757  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.650769  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:34.650777  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:34.650851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:34.689047  451238 cri.go:89] found id: ""
	I0805 13:00:34.689073  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.689081  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:34.689088  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:34.689148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:34.727552  451238 cri.go:89] found id: ""
	I0805 13:00:34.727592  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.727604  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:34.727612  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:34.727683  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:34.761661  451238 cri.go:89] found id: ""
	I0805 13:00:34.761696  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.761707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:34.761715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:34.761791  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:34.800062  451238 cri.go:89] found id: ""
	I0805 13:00:34.800116  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.800128  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:34.800137  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:34.800198  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:34.833536  451238 cri.go:89] found id: ""
	I0805 13:00:34.833566  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.833578  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:34.833586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:34.833654  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:34.868079  451238 cri.go:89] found id: ""
	I0805 13:00:34.868117  451238 logs.go:276] 0 containers: []
	W0805 13:00:34.868126  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:34.868135  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:34.868149  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:34.920092  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:34.920124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:34.934484  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:34.934510  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:35.007716  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:35.007751  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:35.007768  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:35.088183  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:35.088233  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:34.024704  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.025754  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.409917  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.409993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:34.444842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:36.943985  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:37.633443  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:37.647405  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:37.647470  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:37.684682  451238 cri.go:89] found id: ""
	I0805 13:00:37.684711  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.684720  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:37.684727  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:37.684779  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:37.723413  451238 cri.go:89] found id: ""
	I0805 13:00:37.723442  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.723449  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:37.723455  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:37.723506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:37.758388  451238 cri.go:89] found id: ""
	I0805 13:00:37.758418  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.758428  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:37.758437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:37.758501  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:37.797846  451238 cri.go:89] found id: ""
	I0805 13:00:37.797879  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.797890  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:37.797901  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:37.797971  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:37.837053  451238 cri.go:89] found id: ""
	I0805 13:00:37.837082  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.837092  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:37.837104  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:37.837163  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:37.876185  451238 cri.go:89] found id: ""
	I0805 13:00:37.876211  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.876220  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:37.876226  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:37.876294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:37.915318  451238 cri.go:89] found id: ""
	I0805 13:00:37.915350  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.915362  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:37.915370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:37.915429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:37.953916  451238 cri.go:89] found id: ""
	I0805 13:00:37.953944  451238 logs.go:276] 0 containers: []
	W0805 13:00:37.953954  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:37.953964  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:37.953976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:37.991116  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:37.991154  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:38.043796  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:38.043838  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:38.058636  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:38.058669  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:38.143022  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:38.143051  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:38.143067  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:40.721468  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:40.735679  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:40.735774  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:40.773583  451238 cri.go:89] found id: ""
	I0805 13:00:40.773609  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.773617  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:40.773626  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:40.773685  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:40.819857  451238 cri.go:89] found id: ""
	I0805 13:00:40.819886  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.819895  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:40.819901  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:40.819963  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:40.857156  451238 cri.go:89] found id: ""
	I0805 13:00:40.857184  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.857192  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:40.857198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:40.857251  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:40.892933  451238 cri.go:89] found id: ""
	I0805 13:00:40.892970  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.892981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:40.892990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:40.893046  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:40.927128  451238 cri.go:89] found id: ""
	I0805 13:00:40.927163  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.927173  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:40.927182  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:40.927237  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:40.961790  451238 cri.go:89] found id: ""
	I0805 13:00:40.961817  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.961826  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:40.961832  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:40.961886  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:40.996249  451238 cri.go:89] found id: ""
	I0805 13:00:40.996282  451238 logs.go:276] 0 containers: []
	W0805 13:00:40.996293  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:40.996300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:40.996371  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:41.032305  451238 cri.go:89] found id: ""
	I0805 13:00:41.032332  451238 logs.go:276] 0 containers: []
	W0805 13:00:41.032342  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:41.032358  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:41.032375  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:41.075993  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:41.076027  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:41.126020  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:41.126057  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:41.140263  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:41.140288  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:41.216648  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:41.216670  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:41.216683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:38.524812  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.024597  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:38.909518  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:40.910256  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.410062  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:41.443930  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.945026  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:43.796367  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:43.810086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:43.810162  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.844373  451238 cri.go:89] found id: ""
	I0805 13:00:43.844410  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.844422  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:43.844430  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:43.844502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:43.880249  451238 cri.go:89] found id: ""
	I0805 13:00:43.880285  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.880295  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:43.880303  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:43.880376  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:43.921279  451238 cri.go:89] found id: ""
	I0805 13:00:43.921313  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.921323  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:43.921329  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:43.921382  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:43.963736  451238 cri.go:89] found id: ""
	I0805 13:00:43.963782  451238 logs.go:276] 0 containers: []
	W0805 13:00:43.963794  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:43.963803  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:43.963869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:44.009001  451238 cri.go:89] found id: ""
	I0805 13:00:44.009038  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.009050  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:44.009057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:44.009128  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:44.059484  451238 cri.go:89] found id: ""
	I0805 13:00:44.059514  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.059526  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:44.059534  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:44.059605  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:44.102043  451238 cri.go:89] found id: ""
	I0805 13:00:44.102075  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.102088  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:44.102094  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:44.102170  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:44.137518  451238 cri.go:89] found id: ""
	I0805 13:00:44.137558  451238 logs.go:276] 0 containers: []
	W0805 13:00:44.137569  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:44.137584  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:44.137600  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:44.188139  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:44.188175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:44.202544  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:44.202588  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:44.278486  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:44.278508  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:44.278521  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:44.363419  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:44.363458  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:46.905665  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:46.922141  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:46.922206  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:43.025461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.523997  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:45.908437  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.409410  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.445919  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:48.944243  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:46.963468  451238 cri.go:89] found id: ""
	I0805 13:00:46.963494  451238 logs.go:276] 0 containers: []
	W0805 13:00:46.963502  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:46.963508  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:46.963557  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:47.003445  451238 cri.go:89] found id: ""
	I0805 13:00:47.003472  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.003480  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:47.003486  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:47.003537  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:47.043271  451238 cri.go:89] found id: ""
	I0805 13:00:47.043306  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.043318  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:47.043326  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:47.043394  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:47.079843  451238 cri.go:89] found id: ""
	I0805 13:00:47.079874  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.079884  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:47.079893  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:47.079954  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:47.116819  451238 cri.go:89] found id: ""
	I0805 13:00:47.116847  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.116856  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:47.116861  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:47.116917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:47.156302  451238 cri.go:89] found id: ""
	I0805 13:00:47.156331  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.156340  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:47.156353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:47.156410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:47.200419  451238 cri.go:89] found id: ""
	I0805 13:00:47.200449  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.200463  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:47.200469  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:47.200533  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:47.237483  451238 cri.go:89] found id: ""
	I0805 13:00:47.237515  451238 logs.go:276] 0 containers: []
	W0805 13:00:47.237522  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:47.237532  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:47.237545  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:47.251598  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:47.251632  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:47.326457  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:47.326483  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:47.326501  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.410413  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:47.410455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:47.452696  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:47.452732  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.005335  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:50.019610  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:50.019679  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:50.057401  451238 cri.go:89] found id: ""
	I0805 13:00:50.057435  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.057447  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:50.057456  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:50.057516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:50.101710  451238 cri.go:89] found id: ""
	I0805 13:00:50.101743  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.101751  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:50.101758  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:50.101822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:50.139624  451238 cri.go:89] found id: ""
	I0805 13:00:50.139658  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.139669  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:50.139677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:50.139761  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:50.176004  451238 cri.go:89] found id: ""
	I0805 13:00:50.176031  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.176039  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:50.176045  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:50.176123  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:50.219319  451238 cri.go:89] found id: ""
	I0805 13:00:50.219352  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.219362  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:50.219369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:50.219437  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:50.287443  451238 cri.go:89] found id: ""
	I0805 13:00:50.287478  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.287489  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:50.287498  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:50.287582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:50.321018  451238 cri.go:89] found id: ""
	I0805 13:00:50.321047  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.321056  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:50.321063  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:50.321124  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:50.354559  451238 cri.go:89] found id: ""
	I0805 13:00:50.354597  451238 logs.go:276] 0 containers: []
	W0805 13:00:50.354610  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:50.354625  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:50.354642  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:50.398621  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:50.398657  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:50.451693  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:50.451735  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:50.466810  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:50.466851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:50.542431  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:50.542461  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:50.542482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:47.525977  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.025280  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.025760  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.410198  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:52.908466  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:50.946086  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.445962  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:53.128466  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:53.144139  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:53.144216  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:53.178383  451238 cri.go:89] found id: ""
	I0805 13:00:53.178427  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.178438  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:53.178447  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:53.178516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:53.220312  451238 cri.go:89] found id: ""
	I0805 13:00:53.220348  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.220358  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:53.220365  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:53.220432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:53.255352  451238 cri.go:89] found id: ""
	I0805 13:00:53.255380  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.255390  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:53.255398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:53.255473  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:53.293254  451238 cri.go:89] found id: ""
	I0805 13:00:53.293292  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.293311  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:53.293320  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:53.293395  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:53.329407  451238 cri.go:89] found id: ""
	I0805 13:00:53.329436  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.329448  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:53.329455  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:53.329523  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:53.362838  451238 cri.go:89] found id: ""
	I0805 13:00:53.362868  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.362876  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:53.362883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:53.362957  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:53.399283  451238 cri.go:89] found id: ""
	I0805 13:00:53.399313  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.399324  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:53.399332  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:53.399405  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:53.438527  451238 cri.go:89] found id: ""
	I0805 13:00:53.438558  451238 logs.go:276] 0 containers: []
	W0805 13:00:53.438567  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:53.438578  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:53.438597  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:53.492709  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:53.492760  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:53.507522  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:53.507555  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:53.581690  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:53.581710  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:53.581724  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:53.664402  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:53.664451  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.209640  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:56.224403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:56.224487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:56.266214  451238 cri.go:89] found id: ""
	I0805 13:00:56.266243  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.266254  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:56.266263  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:56.266328  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:56.304034  451238 cri.go:89] found id: ""
	I0805 13:00:56.304070  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.304082  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:56.304091  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:56.304172  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:56.342133  451238 cri.go:89] found id: ""
	I0805 13:00:56.342159  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.342167  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:56.342173  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:56.342225  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:56.378549  451238 cri.go:89] found id: ""
	I0805 13:00:56.378588  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.378599  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:56.378606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:56.378667  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:56.415613  451238 cri.go:89] found id: ""
	I0805 13:00:56.415641  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.415651  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:56.415657  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:56.415715  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:56.451915  451238 cri.go:89] found id: ""
	I0805 13:00:56.451944  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.451953  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:56.451960  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:56.452021  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:56.492219  451238 cri.go:89] found id: ""
	I0805 13:00:56.492255  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.492267  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:56.492275  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:56.492347  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:56.534564  451238 cri.go:89] found id: ""
	I0805 13:00:56.534606  451238 logs.go:276] 0 containers: []
	W0805 13:00:56.534618  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:56.534632  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:56.534652  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:56.548772  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:56.548813  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:56.625649  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:56.625678  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:56.625695  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:56.716735  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:56.716787  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:56.771881  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:56.771910  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:54.525355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.025659  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:54.908805  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:56.909601  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:55.943885  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:57.945233  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.325624  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:00:59.338796  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:00:59.338869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:00:59.375002  451238 cri.go:89] found id: ""
	I0805 13:00:59.375039  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.375050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:00:59.375059  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:00:59.375138  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:00:59.410778  451238 cri.go:89] found id: ""
	I0805 13:00:59.410800  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.410810  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:00:59.410817  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:00:59.410873  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:00:59.453728  451238 cri.go:89] found id: ""
	I0805 13:00:59.453760  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.453771  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:00:59.453779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:00:59.453845  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:00:59.492968  451238 cri.go:89] found id: ""
	I0805 13:00:59.493002  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.493013  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:00:59.493021  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:00:59.493091  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:00:59.533342  451238 cri.go:89] found id: ""
	I0805 13:00:59.533372  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.533383  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:00:59.533390  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:00:59.533445  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:00:59.569677  451238 cri.go:89] found id: ""
	I0805 13:00:59.569705  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.569715  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:00:59.569722  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:00:59.569789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:00:59.605106  451238 cri.go:89] found id: ""
	I0805 13:00:59.605139  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.605150  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:00:59.605158  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:00:59.605228  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:00:59.639948  451238 cri.go:89] found id: ""
	I0805 13:00:59.639980  451238 logs.go:276] 0 containers: []
	W0805 13:00:59.639989  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:00:59.640000  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:00:59.640016  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:00:59.679926  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:00:59.679956  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:00:59.731545  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:00:59.731591  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:00:59.746286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:00:59.746320  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:00:59.828398  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:00:59.828420  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:00:59.828439  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:00:59.524365  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.525092  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.410713  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:01.909619  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:00:59.945483  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.445780  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:02.412560  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:02.429633  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:02.429718  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:02.475916  451238 cri.go:89] found id: ""
	I0805 13:01:02.475951  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.475963  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:02.475971  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:02.476061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:02.528807  451238 cri.go:89] found id: ""
	I0805 13:01:02.528837  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.528849  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:02.528856  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:02.528924  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:02.575164  451238 cri.go:89] found id: ""
	I0805 13:01:02.575194  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.575210  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:02.575218  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:02.575286  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:02.614709  451238 cri.go:89] found id: ""
	I0805 13:01:02.614800  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.614815  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:02.614824  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:02.614902  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:02.654941  451238 cri.go:89] found id: ""
	I0805 13:01:02.654979  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.654990  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:02.654997  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:02.655069  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:02.690552  451238 cri.go:89] found id: ""
	I0805 13:01:02.690586  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.690595  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:02.690602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:02.690657  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:02.725607  451238 cri.go:89] found id: ""
	I0805 13:01:02.725644  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.725656  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:02.725665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:02.725745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:02.760180  451238 cri.go:89] found id: ""
	I0805 13:01:02.760211  451238 logs.go:276] 0 containers: []
	W0805 13:01:02.760223  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:02.760244  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:02.760262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:02.813071  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:02.813128  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:02.828633  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:02.828665  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:02.898049  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:02.898074  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:02.898087  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:02.988077  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:02.988124  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:05.532719  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:05.546423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:05.546489  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:05.590978  451238 cri.go:89] found id: ""
	I0805 13:01:05.591006  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.591013  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:05.591019  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:05.591071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:05.631251  451238 cri.go:89] found id: ""
	I0805 13:01:05.631287  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.631298  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:05.631306  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:05.631391  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:05.671826  451238 cri.go:89] found id: ""
	I0805 13:01:05.671863  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.671875  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:05.671883  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:05.671951  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:05.708147  451238 cri.go:89] found id: ""
	I0805 13:01:05.708176  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.708186  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:05.708194  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:05.708262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:05.741962  451238 cri.go:89] found id: ""
	I0805 13:01:05.741994  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.742006  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:05.742015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:05.742087  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:05.777930  451238 cri.go:89] found id: ""
	I0805 13:01:05.777965  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.777976  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:05.777985  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:05.778061  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:05.813066  451238 cri.go:89] found id: ""
	I0805 13:01:05.813099  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.813111  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:05.813119  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:05.813189  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:05.849382  451238 cri.go:89] found id: ""
	I0805 13:01:05.849410  451238 logs.go:276] 0 containers: []
	W0805 13:01:05.849418  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:05.849428  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:05.849440  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:05.903376  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:05.903423  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:05.918540  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:05.918575  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:05.990608  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:05.990637  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:05.990658  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:06.072524  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:06.072571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:04.025528  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.525325  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.409190  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:06.409231  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:04.944649  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:07.445278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.617528  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:08.631637  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:08.631713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:08.669999  451238 cri.go:89] found id: ""
	I0805 13:01:08.670039  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.670050  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:08.670065  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:08.670147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:08.705322  451238 cri.go:89] found id: ""
	I0805 13:01:08.705356  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.705365  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:08.705370  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:08.705442  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:08.744884  451238 cri.go:89] found id: ""
	I0805 13:01:08.744915  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.744927  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:08.744936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:08.745018  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:08.782394  451238 cri.go:89] found id: ""
	I0805 13:01:08.782428  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.782440  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:08.782448  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:08.782518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:08.816989  451238 cri.go:89] found id: ""
	I0805 13:01:08.817018  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.817027  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:08.817034  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:08.817106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:08.856389  451238 cri.go:89] found id: ""
	I0805 13:01:08.856420  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.856431  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:08.856439  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:08.856506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:08.891942  451238 cri.go:89] found id: ""
	I0805 13:01:08.891975  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.891986  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:08.891995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:08.892064  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:08.930329  451238 cri.go:89] found id: ""
	I0805 13:01:08.930364  451238 logs.go:276] 0 containers: []
	W0805 13:01:08.930375  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:08.930389  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:08.930406  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:08.972574  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:08.972610  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:09.026194  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:09.026228  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:09.040973  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:09.041002  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:09.115094  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:09.115121  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:09.115143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:11.698322  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:11.711841  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:11.711927  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:11.749152  451238 cri.go:89] found id: ""
	I0805 13:01:11.749187  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.749199  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:11.749207  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:11.749274  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:11.785395  451238 cri.go:89] found id: ""
	I0805 13:01:11.785430  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.785441  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:11.785449  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:11.785516  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:11.822240  451238 cri.go:89] found id: ""
	I0805 13:01:11.822282  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.822293  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:11.822302  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:11.822372  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:11.858755  451238 cri.go:89] found id: ""
	I0805 13:01:11.858794  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.858805  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:11.858814  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:11.858884  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:11.893064  451238 cri.go:89] found id: ""
	I0805 13:01:11.893101  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.893113  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:11.893121  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:11.893195  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:11.930965  451238 cri.go:89] found id: ""
	I0805 13:01:11.931003  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.931015  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:11.931025  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:11.931089  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:09.025566  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.525069  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:08.910618  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.409157  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:09.944797  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:12.445029  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:11.967594  451238 cri.go:89] found id: ""
	I0805 13:01:11.967620  451238 logs.go:276] 0 containers: []
	W0805 13:01:11.967630  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:11.967638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:11.967697  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:12.004978  451238 cri.go:89] found id: ""
	I0805 13:01:12.005007  451238 logs.go:276] 0 containers: []
	W0805 13:01:12.005015  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:12.005025  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:12.005037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:12.087476  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:12.087500  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:12.087515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:12.177690  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:12.177757  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:12.222858  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:12.222889  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:12.273322  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:12.273362  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:14.788210  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:14.802351  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:14.802426  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:14.837705  451238 cri.go:89] found id: ""
	I0805 13:01:14.837736  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.837746  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:14.837755  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:14.837824  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:14.873389  451238 cri.go:89] found id: ""
	I0805 13:01:14.873420  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.873430  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:14.873438  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:14.873506  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:14.913969  451238 cri.go:89] found id: ""
	I0805 13:01:14.913999  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.914009  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:14.914018  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:14.914081  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:14.953478  451238 cri.go:89] found id: ""
	I0805 13:01:14.953510  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.953521  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:14.953528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:14.953584  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:14.992166  451238 cri.go:89] found id: ""
	I0805 13:01:14.992197  451238 logs.go:276] 0 containers: []
	W0805 13:01:14.992206  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:14.992212  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:14.992291  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:15.031258  451238 cri.go:89] found id: ""
	I0805 13:01:15.031285  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.031293  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:15.031300  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:15.031353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:15.068944  451238 cri.go:89] found id: ""
	I0805 13:01:15.068972  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.068980  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:15.068986  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:15.069042  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:15.105413  451238 cri.go:89] found id: ""
	I0805 13:01:15.105443  451238 logs.go:276] 0 containers: []
	W0805 13:01:15.105454  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:15.105467  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:15.105489  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:15.161925  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:15.161969  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:15.177174  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:15.177206  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:15.257950  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:15.257975  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:15.257989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:15.336672  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:15.336716  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:13.526088  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:16.025513  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:13.908773  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:15.908817  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.910431  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:14.945842  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.444869  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:17.876314  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:17.889842  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:17.889909  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:17.928050  451238 cri.go:89] found id: ""
	I0805 13:01:17.928077  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.928086  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:17.928092  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:17.928150  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:17.965713  451238 cri.go:89] found id: ""
	I0805 13:01:17.965751  451238 logs.go:276] 0 containers: []
	W0805 13:01:17.965762  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:17.965770  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:17.965837  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:18.002938  451238 cri.go:89] found id: ""
	I0805 13:01:18.002972  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.002984  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:18.002992  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:18.003062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:18.040140  451238 cri.go:89] found id: ""
	I0805 13:01:18.040178  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.040190  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:18.040198  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:18.040269  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:18.075427  451238 cri.go:89] found id: ""
	I0805 13:01:18.075463  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.075475  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:18.075490  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:18.075558  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:18.113469  451238 cri.go:89] found id: ""
	I0805 13:01:18.113507  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.113521  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:18.113528  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:18.113587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:18.152626  451238 cri.go:89] found id: ""
	I0805 13:01:18.152662  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.152672  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:18.152678  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:18.152745  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:18.189540  451238 cri.go:89] found id: ""
	I0805 13:01:18.189577  451238 logs.go:276] 0 containers: []
	W0805 13:01:18.189590  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:18.189602  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:18.189618  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:18.244314  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:18.244353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:18.257912  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:18.257939  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:18.339659  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:18.339682  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:18.339699  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:18.425391  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:18.425449  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:20.975889  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:20.989798  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:20.989868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:21.030858  451238 cri.go:89] found id: ""
	I0805 13:01:21.030894  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.030906  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:21.030915  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:21.030979  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:21.067367  451238 cri.go:89] found id: ""
	I0805 13:01:21.067402  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.067411  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:21.067419  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:21.067476  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:21.104307  451238 cri.go:89] found id: ""
	I0805 13:01:21.104337  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.104352  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:21.104361  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:21.104424  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:21.141486  451238 cri.go:89] found id: ""
	I0805 13:01:21.141519  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.141531  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:21.141539  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:21.141606  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:21.179247  451238 cri.go:89] found id: ""
	I0805 13:01:21.179305  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.179317  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:21.179330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:21.179406  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:21.215030  451238 cri.go:89] found id: ""
	I0805 13:01:21.215065  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.215075  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:21.215083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:21.215152  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:21.252982  451238 cri.go:89] found id: ""
	I0805 13:01:21.253008  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.253016  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:21.253022  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:21.253097  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:21.290256  451238 cri.go:89] found id: ""
	I0805 13:01:21.290292  451238 logs.go:276] 0 containers: []
	W0805 13:01:21.290302  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:21.290325  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:21.290343  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:21.342809  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:21.342855  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:21.357959  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:21.358000  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:21.433087  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:21.433120  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:21.433143  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:21.514261  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:21.514312  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:18.025965  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.524832  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:20.409943  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:22.909233  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:19.445074  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:21.445547  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:23.445637  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.060402  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:24.076056  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:24.076131  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:24.115976  451238 cri.go:89] found id: ""
	I0805 13:01:24.116009  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.116022  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:24.116031  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:24.116111  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:24.158411  451238 cri.go:89] found id: ""
	I0805 13:01:24.158440  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.158448  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:24.158454  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:24.158520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:24.194589  451238 cri.go:89] found id: ""
	I0805 13:01:24.194624  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.194635  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:24.194644  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:24.194720  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:24.231528  451238 cri.go:89] found id: ""
	I0805 13:01:24.231562  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.231569  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:24.231576  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:24.231649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:24.268491  451238 cri.go:89] found id: ""
	I0805 13:01:24.268523  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.268532  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:24.268538  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:24.268602  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:24.306718  451238 cri.go:89] found id: ""
	I0805 13:01:24.306752  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.306763  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:24.306772  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:24.306839  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:24.343552  451238 cri.go:89] found id: ""
	I0805 13:01:24.343578  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.343586  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:24.343593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:24.343649  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:24.384555  451238 cri.go:89] found id: ""
	I0805 13:01:24.384590  451238 logs.go:276] 0 containers: []
	W0805 13:01:24.384602  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:24.384615  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:24.384633  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:24.430256  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:24.430298  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:24.484616  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:24.484661  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:24.500926  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:24.500958  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:24.581379  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:24.581410  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:24.581424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:22.525806  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:24.526411  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.024452  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.408887  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.409717  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:25.945113  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:28.444740  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:27.167538  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:27.181959  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:27.182035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:27.223243  451238 cri.go:89] found id: ""
	I0805 13:01:27.223282  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.223293  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:27.223301  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:27.223374  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:27.257806  451238 cri.go:89] found id: ""
	I0805 13:01:27.257843  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.257856  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:27.257864  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:27.257940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:27.304306  451238 cri.go:89] found id: ""
	I0805 13:01:27.304342  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.304353  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:27.304370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:27.304439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:27.342595  451238 cri.go:89] found id: ""
	I0805 13:01:27.342623  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.342631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:27.342638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:27.342707  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:27.385628  451238 cri.go:89] found id: ""
	I0805 13:01:27.385661  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.385670  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:27.385677  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:27.385760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:27.425059  451238 cri.go:89] found id: ""
	I0805 13:01:27.425091  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.425100  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:27.425106  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:27.425175  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:27.465739  451238 cri.go:89] found id: ""
	I0805 13:01:27.465783  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.465794  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:27.465807  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:27.465869  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:27.506431  451238 cri.go:89] found id: ""
	I0805 13:01:27.506460  451238 logs.go:276] 0 containers: []
	W0805 13:01:27.506468  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:27.506477  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:27.506494  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:27.586440  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:27.586467  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:27.586482  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:27.667826  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:27.667869  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:27.710458  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:27.710496  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:27.763057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:27.763100  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.278799  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:30.293788  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:30.293874  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:30.336209  451238 cri.go:89] found id: ""
	I0805 13:01:30.336240  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.336248  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:30.336255  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:30.336323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:30.371593  451238 cri.go:89] found id: ""
	I0805 13:01:30.371627  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.371642  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:30.371649  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:30.371714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:30.408266  451238 cri.go:89] found id: ""
	I0805 13:01:30.408298  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.408317  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:30.408325  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:30.408388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:30.448841  451238 cri.go:89] found id: ""
	I0805 13:01:30.448864  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.448872  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:30.448878  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:30.448940  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:30.488367  451238 cri.go:89] found id: ""
	I0805 13:01:30.488403  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.488411  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:30.488418  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:30.488485  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:30.527131  451238 cri.go:89] found id: ""
	I0805 13:01:30.527163  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.527173  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:30.527181  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:30.527249  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:30.568089  451238 cri.go:89] found id: ""
	I0805 13:01:30.568122  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.568131  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:30.568138  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:30.568203  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:30.605952  451238 cri.go:89] found id: ""
	I0805 13:01:30.605990  451238 logs.go:276] 0 containers: []
	W0805 13:01:30.606007  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:30.606021  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:30.606041  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:30.656449  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:30.656491  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:30.710124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:30.710164  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:30.724417  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:30.724455  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:30.820639  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:30.820669  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:30.820687  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:29.025377  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:31.525340  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:29.909043  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.410359  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:30.445047  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:32.445931  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:33.403497  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:33.419581  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:33.419651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:33.462011  451238 cri.go:89] found id: ""
	I0805 13:01:33.462042  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.462051  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:33.462057  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:33.462126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:33.502476  451238 cri.go:89] found id: ""
	I0805 13:01:33.502509  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.502519  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:33.502527  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:33.502601  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:33.547392  451238 cri.go:89] found id: ""
	I0805 13:01:33.547421  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.547430  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:33.547437  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:33.547490  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:33.584013  451238 cri.go:89] found id: ""
	I0805 13:01:33.584040  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.584048  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:33.584054  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:33.584125  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:33.617325  451238 cri.go:89] found id: ""
	I0805 13:01:33.617359  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.617367  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:33.617374  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:33.617429  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:33.651922  451238 cri.go:89] found id: ""
	I0805 13:01:33.651959  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.651971  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:33.651980  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:33.652049  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:33.689487  451238 cri.go:89] found id: ""
	I0805 13:01:33.689515  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.689522  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:33.689529  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:33.689580  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:33.723220  451238 cri.go:89] found id: ""
	I0805 13:01:33.723251  451238 logs.go:276] 0 containers: []
	W0805 13:01:33.723260  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:33.723270  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:33.723282  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:33.777271  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:33.777311  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:33.792497  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:33.792532  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:33.866801  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:33.866826  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:33.866842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:33.946739  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:33.946774  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:36.486108  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:36.501316  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:36.501397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:36.542082  451238 cri.go:89] found id: ""
	I0805 13:01:36.542118  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.542130  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:36.542139  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:36.542217  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:36.581005  451238 cri.go:89] found id: ""
	I0805 13:01:36.581047  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.581059  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:36.581068  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:36.581148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:36.623945  451238 cri.go:89] found id: ""
	I0805 13:01:36.623974  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.623982  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:36.623987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:36.624041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:36.661632  451238 cri.go:89] found id: ""
	I0805 13:01:36.661665  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.661673  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:36.661680  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:36.661738  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:36.701808  451238 cri.go:89] found id: ""
	I0805 13:01:36.701839  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.701850  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:36.701857  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:36.701941  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:36.742287  451238 cri.go:89] found id: ""
	I0805 13:01:36.742320  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.742331  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:36.742340  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:36.742410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:36.794581  451238 cri.go:89] found id: ""
	I0805 13:01:36.794610  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.794621  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:36.794629  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:36.794690  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:36.833271  451238 cri.go:89] found id: ""
	I0805 13:01:36.833301  451238 logs.go:276] 0 containers: []
	W0805 13:01:36.833311  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:36.833325  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:36.833346  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:36.921427  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:36.921467  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:34.024353  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.025557  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.909401  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.909529  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:34.945077  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.945632  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:36.965468  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:36.965503  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:37.018475  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:37.018515  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:37.033671  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:37.033697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:37.105339  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.606042  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:39.619215  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:39.619296  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:39.655614  451238 cri.go:89] found id: ""
	I0805 13:01:39.655648  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.655660  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:39.655668  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:39.655760  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:39.691489  451238 cri.go:89] found id: ""
	I0805 13:01:39.691523  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.691535  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:39.691543  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:39.691610  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:39.726394  451238 cri.go:89] found id: ""
	I0805 13:01:39.726427  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.726438  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:39.726446  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:39.726518  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:39.759847  451238 cri.go:89] found id: ""
	I0805 13:01:39.759897  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.759909  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:39.759918  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:39.759988  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:39.795011  451238 cri.go:89] found id: ""
	I0805 13:01:39.795043  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.795051  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:39.795057  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:39.795120  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:39.831302  451238 cri.go:89] found id: ""
	I0805 13:01:39.831336  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.831346  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:39.831356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:39.831432  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:39.866506  451238 cri.go:89] found id: ""
	I0805 13:01:39.866540  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.866547  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:39.866554  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:39.866622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:39.898083  451238 cri.go:89] found id: ""
	I0805 13:01:39.898108  451238 logs.go:276] 0 containers: []
	W0805 13:01:39.898115  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:39.898128  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:39.898147  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:39.912192  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:39.912221  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:39.989216  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:39.989246  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:39.989262  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:40.069702  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:40.069746  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:40.118390  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:40.118428  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:38.525929  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:40.527120  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:38.909905  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.408953  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.409966  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:39.445474  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:41.944704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:43.944956  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:42.669421  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:42.682287  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:42.682359  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:42.722933  451238 cri.go:89] found id: ""
	I0805 13:01:42.722961  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.722969  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:42.722975  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:42.723037  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:42.757604  451238 cri.go:89] found id: ""
	I0805 13:01:42.757635  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.757646  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:42.757654  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:42.757723  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:42.795825  451238 cri.go:89] found id: ""
	I0805 13:01:42.795852  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.795863  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:42.795871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:42.795939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:42.831749  451238 cri.go:89] found id: ""
	I0805 13:01:42.831779  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.831791  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:42.831800  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:42.831862  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:42.866280  451238 cri.go:89] found id: ""
	I0805 13:01:42.866310  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.866322  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:42.866330  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:42.866390  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:42.904393  451238 cri.go:89] found id: ""
	I0805 13:01:42.904427  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.904436  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:42.904445  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:42.904510  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:42.943175  451238 cri.go:89] found id: ""
	I0805 13:01:42.943204  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.943215  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:42.943223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:42.943292  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:42.979117  451238 cri.go:89] found id: ""
	I0805 13:01:42.979144  451238 logs.go:276] 0 containers: []
	W0805 13:01:42.979152  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:42.979174  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:42.979191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:43.032032  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:43.032070  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:43.046285  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:43.046315  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:43.120300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:43.120327  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:43.120347  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:43.209800  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:43.209851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:45.759057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:45.771984  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:45.772056  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:45.805421  451238 cri.go:89] found id: ""
	I0805 13:01:45.805451  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.805459  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:45.805466  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:45.805521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:45.841552  451238 cri.go:89] found id: ""
	I0805 13:01:45.841579  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.841588  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:45.841597  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:45.841672  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:45.878502  451238 cri.go:89] found id: ""
	I0805 13:01:45.878529  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.878537  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:45.878546  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:45.878622  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:45.921145  451238 cri.go:89] found id: ""
	I0805 13:01:45.921187  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.921198  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:45.921207  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:45.921273  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:45.958408  451238 cri.go:89] found id: ""
	I0805 13:01:45.958437  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.958445  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:45.958452  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:45.958521  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:45.994632  451238 cri.go:89] found id: ""
	I0805 13:01:45.994660  451238 logs.go:276] 0 containers: []
	W0805 13:01:45.994669  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:45.994676  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:45.994727  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:46.032930  451238 cri.go:89] found id: ""
	I0805 13:01:46.032961  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.032971  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:46.032978  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:46.033041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:46.074396  451238 cri.go:89] found id: ""
	I0805 13:01:46.074429  451238 logs.go:276] 0 containers: []
	W0805 13:01:46.074441  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:46.074454  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:46.074475  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:46.131977  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:46.132020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:46.147924  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:46.147957  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:46.222005  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:46.222038  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:46.222054  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:46.306799  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:46.306842  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:43.024643  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.524936  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:45.410385  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:47.909281  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:46.444746  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.950198  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:48.856982  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:48.870945  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:48.871025  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:48.930811  451238 cri.go:89] found id: ""
	I0805 13:01:48.930837  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.930852  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:48.930858  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:48.930917  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:48.986604  451238 cri.go:89] found id: ""
	I0805 13:01:48.986629  451238 logs.go:276] 0 containers: []
	W0805 13:01:48.986637  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:48.986643  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:48.986706  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:49.039433  451238 cri.go:89] found id: ""
	I0805 13:01:49.039468  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.039479  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:49.039487  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:49.039555  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:49.079593  451238 cri.go:89] found id: ""
	I0805 13:01:49.079625  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.079637  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:49.079645  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:49.079714  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:49.116243  451238 cri.go:89] found id: ""
	I0805 13:01:49.116274  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.116284  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:49.116292  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:49.116360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:49.158744  451238 cri.go:89] found id: ""
	I0805 13:01:49.158779  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.158790  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:49.158799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:49.158868  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:49.193747  451238 cri.go:89] found id: ""
	I0805 13:01:49.193778  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.193786  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:49.193792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:49.193843  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:49.227663  451238 cri.go:89] found id: ""
	I0805 13:01:49.227691  451238 logs.go:276] 0 containers: []
	W0805 13:01:49.227704  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:49.227714  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:49.227727  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:49.281380  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:49.281424  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:49.296286  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:49.296318  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:49.368584  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:49.368609  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:49.368625  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:49.453857  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:49.453909  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:48.024987  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.026076  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:50.408363  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:52.410039  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.444602  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:53.445118  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:51.993057  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:52.006066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:52.006148  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:52.043179  451238 cri.go:89] found id: ""
	I0805 13:01:52.043212  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.043223  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:52.043231  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:52.043300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:52.076469  451238 cri.go:89] found id: ""
	I0805 13:01:52.076502  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.076512  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:52.076520  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:52.076586  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:52.112443  451238 cri.go:89] found id: ""
	I0805 13:01:52.112477  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.112488  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:52.112497  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:52.112569  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:52.147589  451238 cri.go:89] found id: ""
	I0805 13:01:52.147620  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.147631  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:52.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:52.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:52.184016  451238 cri.go:89] found id: ""
	I0805 13:01:52.184053  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.184063  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:52.184072  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:52.184134  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:52.219670  451238 cri.go:89] found id: ""
	I0805 13:01:52.219702  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.219714  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:52.219727  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:52.219820  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:52.258697  451238 cri.go:89] found id: ""
	I0805 13:01:52.258731  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.258744  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:52.258752  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:52.258818  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:52.299599  451238 cri.go:89] found id: ""
	I0805 13:01:52.299636  451238 logs.go:276] 0 containers: []
	W0805 13:01:52.299649  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:52.299665  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:52.299683  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:52.351730  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:52.351772  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:52.365993  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:52.366022  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:52.436019  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:52.436041  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:52.436056  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:52.520082  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:52.520118  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:55.064214  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:55.077358  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:55.077454  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:55.110523  451238 cri.go:89] found id: ""
	I0805 13:01:55.110555  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.110564  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:55.110570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:55.110630  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:55.147870  451238 cri.go:89] found id: ""
	I0805 13:01:55.147905  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.147916  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:55.147925  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:55.147998  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:55.180769  451238 cri.go:89] found id: ""
	I0805 13:01:55.180803  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.180814  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:55.180822  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:55.180890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:55.217290  451238 cri.go:89] found id: ""
	I0805 13:01:55.217332  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.217343  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:55.217353  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:55.217420  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:55.254185  451238 cri.go:89] found id: ""
	I0805 13:01:55.254221  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.254232  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:55.254239  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:55.254295  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:55.290633  451238 cri.go:89] found id: ""
	I0805 13:01:55.290662  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.290673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:55.290681  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:55.290747  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:55.325830  451238 cri.go:89] found id: ""
	I0805 13:01:55.325862  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.325873  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:55.325880  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:55.325947  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:55.359887  451238 cri.go:89] found id: ""
	I0805 13:01:55.359922  451238 logs.go:276] 0 containers: []
	W0805 13:01:55.359931  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:55.359941  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:55.359953  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:55.418251  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:55.418299  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:55.432007  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:55.432038  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:55.507177  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:01:55.507205  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:55.507219  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:55.586919  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:55.586965  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:52.525480  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.525653  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.024834  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:54.410408  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:56.909810  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:55.944741  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:57.946654  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:58.128822  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:01:58.142726  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:01:58.142799  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:01:58.178027  451238 cri.go:89] found id: ""
	I0805 13:01:58.178056  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.178067  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:01:58.178075  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:01:58.178147  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:01:58.213309  451238 cri.go:89] found id: ""
	I0805 13:01:58.213340  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.213351  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:01:58.213358  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:01:58.213430  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:01:58.247296  451238 cri.go:89] found id: ""
	I0805 13:01:58.247323  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.247332  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:01:58.247338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:01:58.247393  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:01:58.280226  451238 cri.go:89] found id: ""
	I0805 13:01:58.280255  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.280266  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:01:58.280277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:01:58.280335  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:01:58.316934  451238 cri.go:89] found id: ""
	I0805 13:01:58.316969  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.316981  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:01:58.316989  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:01:58.317055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:01:58.360931  451238 cri.go:89] found id: ""
	I0805 13:01:58.360967  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.360979  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:01:58.360987  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:01:58.361055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:01:58.399112  451238 cri.go:89] found id: ""
	I0805 13:01:58.399150  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.399163  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:01:58.399171  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:01:58.399244  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:01:58.441903  451238 cri.go:89] found id: ""
	I0805 13:01:58.441930  451238 logs.go:276] 0 containers: []
	W0805 13:01:58.441941  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:01:58.441952  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:01:58.441967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:01:58.524869  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:01:58.524908  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:58.562598  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:01:58.562634  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:01:58.618274  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:01:58.618313  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:01:58.633011  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:01:58.633039  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:01:58.706287  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.206971  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:01.222277  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:01.222357  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:01.266949  451238 cri.go:89] found id: ""
	I0805 13:02:01.266982  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.266993  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:01.267007  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:01.267108  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:01.306765  451238 cri.go:89] found id: ""
	I0805 13:02:01.306791  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.306799  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:01.306805  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:01.306859  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:01.345108  451238 cri.go:89] found id: ""
	I0805 13:02:01.345145  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.345157  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:01.345164  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:01.345227  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:01.383201  451238 cri.go:89] found id: ""
	I0805 13:02:01.383231  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.383239  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:01.383245  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:01.383307  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:01.419292  451238 cri.go:89] found id: ""
	I0805 13:02:01.419320  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.419331  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:01.419338  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:01.419410  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:01.456447  451238 cri.go:89] found id: ""
	I0805 13:02:01.456482  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.456492  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:01.456500  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:01.456568  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:01.496266  451238 cri.go:89] found id: ""
	I0805 13:02:01.496298  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.496306  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:01.496312  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:01.496375  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:01.541492  451238 cri.go:89] found id: ""
	I0805 13:02:01.541529  451238 logs.go:276] 0 containers: []
	W0805 13:02:01.541541  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:01.541555  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:01.541571  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:01.593140  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:01.593185  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:01.606641  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:01.606670  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:01.681989  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:01.682015  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:01.682030  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:01.765612  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:01.765655  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:01:59.025355  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.025443  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:01:59.408591  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:01.409368  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:00.445254  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:02.944495  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:04.311066  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:04.326530  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:04.326599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:04.360091  451238 cri.go:89] found id: ""
	I0805 13:02:04.360124  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.360136  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:04.360142  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:04.360214  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:04.398983  451238 cri.go:89] found id: ""
	I0805 13:02:04.399014  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.399026  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:04.399045  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:04.399122  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:04.433444  451238 cri.go:89] found id: ""
	I0805 13:02:04.433474  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.433483  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:04.433495  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:04.433546  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:04.470113  451238 cri.go:89] found id: ""
	I0805 13:02:04.470145  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.470156  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:04.470167  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:04.470233  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:04.505695  451238 cri.go:89] found id: ""
	I0805 13:02:04.505721  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.505731  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:04.505738  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:04.505801  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:04.544093  451238 cri.go:89] found id: ""
	I0805 13:02:04.544121  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.544129  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:04.544136  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:04.544196  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:04.579663  451238 cri.go:89] found id: ""
	I0805 13:02:04.579702  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.579715  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:04.579724  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:04.579803  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:04.616524  451238 cri.go:89] found id: ""
	I0805 13:02:04.616565  451238 logs.go:276] 0 containers: []
	W0805 13:02:04.616577  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:04.616590  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:04.616607  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:04.693014  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:04.693035  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:04.693048  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:04.772508  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:04.772550  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:04.813014  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:04.813043  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:04.864653  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:04.864702  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:03.525225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:06.024868  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:03.908365  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.908993  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.910958  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:05.444593  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.444737  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:07.378816  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:07.392347  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:07.392439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:07.425843  451238 cri.go:89] found id: ""
	I0805 13:02:07.425876  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.425887  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:07.425895  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:07.425958  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:07.461547  451238 cri.go:89] found id: ""
	I0805 13:02:07.461575  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.461584  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:07.461591  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:07.461651  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:07.496461  451238 cri.go:89] found id: ""
	I0805 13:02:07.496500  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.496510  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:07.496521  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:07.496599  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:07.531520  451238 cri.go:89] found id: ""
	I0805 13:02:07.531556  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.531566  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:07.531574  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:07.531642  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:07.571821  451238 cri.go:89] found id: ""
	I0805 13:02:07.571855  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.571866  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:07.571876  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:07.571948  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:07.611111  451238 cri.go:89] found id: ""
	I0805 13:02:07.611151  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.611159  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:07.611165  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:07.611226  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:07.651428  451238 cri.go:89] found id: ""
	I0805 13:02:07.651456  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.651464  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:07.651470  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:07.651520  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:07.689828  451238 cri.go:89] found id: ""
	I0805 13:02:07.689858  451238 logs.go:276] 0 containers: []
	W0805 13:02:07.689866  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:07.689877  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:07.689893  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:07.746381  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:07.746422  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:07.760953  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:07.760989  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:07.834859  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:07.834883  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:07.834901  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:07.915344  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:07.915376  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:10.459232  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:10.472789  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:10.472853  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:10.508434  451238 cri.go:89] found id: ""
	I0805 13:02:10.508462  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.508470  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:10.508477  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:10.508539  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:10.543487  451238 cri.go:89] found id: ""
	I0805 13:02:10.543515  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.543524  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:10.543530  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:10.543582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:10.588274  451238 cri.go:89] found id: ""
	I0805 13:02:10.588302  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.588310  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:10.588317  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:10.588379  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:10.620810  451238 cri.go:89] found id: ""
	I0805 13:02:10.620851  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.620863  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:10.620871  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:10.620945  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:10.657882  451238 cri.go:89] found id: ""
	I0805 13:02:10.657913  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.657923  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:10.657929  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:10.657993  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:10.696188  451238 cri.go:89] found id: ""
	I0805 13:02:10.696220  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.696229  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:10.696235  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:10.696294  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:10.729942  451238 cri.go:89] found id: ""
	I0805 13:02:10.729977  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.729988  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:10.729996  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:10.730050  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:10.761972  451238 cri.go:89] found id: ""
	I0805 13:02:10.762000  451238 logs.go:276] 0 containers: []
	W0805 13:02:10.762008  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:10.762018  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:10.762032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:10.816859  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:10.816890  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:10.830348  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:10.830379  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:10.902720  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:10.902753  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:10.902771  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:10.981464  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:10.981505  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:08.024948  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.525441  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:10.408841  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:12.409506  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:09.445359  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:11.944853  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:13.528296  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:13.541813  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:13.541887  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:13.575632  451238 cri.go:89] found id: ""
	I0805 13:02:13.575669  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.575681  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:13.575689  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:13.575766  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:13.612646  451238 cri.go:89] found id: ""
	I0805 13:02:13.612680  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.612691  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:13.612699  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:13.612755  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:13.650310  451238 cri.go:89] found id: ""
	I0805 13:02:13.650341  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.650361  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:13.650369  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:13.650439  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:13.686941  451238 cri.go:89] found id: ""
	I0805 13:02:13.686970  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.686981  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:13.686990  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:13.687054  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:13.722250  451238 cri.go:89] found id: ""
	I0805 13:02:13.722285  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.722297  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:13.722306  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:13.722388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:13.758337  451238 cri.go:89] found id: ""
	I0805 13:02:13.758367  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.758375  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:13.758382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:13.758443  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:13.792980  451238 cri.go:89] found id: ""
	I0805 13:02:13.793016  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.793028  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:13.793036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:13.793127  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:13.831511  451238 cri.go:89] found id: ""
	I0805 13:02:13.831539  451238 logs.go:276] 0 containers: []
	W0805 13:02:13.831547  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:13.831558  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:13.831579  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.885124  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:13.885169  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:13.899112  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:13.899155  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:13.977058  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:13.977099  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:13.977115  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:14.060873  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:14.060911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:16.602595  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:16.617557  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:16.617638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:16.660212  451238 cri.go:89] found id: ""
	I0805 13:02:16.660244  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.660256  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:16.660264  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:16.660323  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:16.695515  451238 cri.go:89] found id: ""
	I0805 13:02:16.695553  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.695564  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:16.695572  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:16.695638  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:16.732844  451238 cri.go:89] found id: ""
	I0805 13:02:16.732875  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.732884  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:16.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:16.732943  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:16.772465  451238 cri.go:89] found id: ""
	I0805 13:02:16.772497  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.772504  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:16.772517  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:16.772582  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:16.809826  451238 cri.go:89] found id: ""
	I0805 13:02:16.809863  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.809875  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:16.809882  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:16.809949  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:16.849480  451238 cri.go:89] found id: ""
	I0805 13:02:16.849512  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.849523  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:16.849531  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:16.849598  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:16.884098  451238 cri.go:89] found id: ""
	I0805 13:02:16.884132  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.884144  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:16.884152  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:16.884222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:16.920497  451238 cri.go:89] found id: ""
	I0805 13:02:16.920523  451238 logs.go:276] 0 containers: []
	W0805 13:02:16.920530  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:16.920541  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:16.920556  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:13.025299  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:15.525474  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.908633  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.909254  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:14.445321  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.945044  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:18.945630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:16.975287  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:16.975317  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:16.989524  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:16.989552  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:17.057997  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:17.058022  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:17.058037  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:17.133721  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:17.133763  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:19.672385  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:19.687948  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:19.688017  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:19.724105  451238 cri.go:89] found id: ""
	I0805 13:02:19.724132  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.724140  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:19.724147  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:19.724199  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:19.758263  451238 cri.go:89] found id: ""
	I0805 13:02:19.758296  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.758306  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:19.758314  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:19.758381  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:19.792924  451238 cri.go:89] found id: ""
	I0805 13:02:19.792954  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.792961  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:19.792967  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:19.793023  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:19.826340  451238 cri.go:89] found id: ""
	I0805 13:02:19.826367  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.826375  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:19.826382  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:19.826434  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:19.864289  451238 cri.go:89] found id: ""
	I0805 13:02:19.864323  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.864334  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:19.864343  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:19.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:19.899630  451238 cri.go:89] found id: ""
	I0805 13:02:19.899661  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.899673  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:19.899682  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:19.899786  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:19.935798  451238 cri.go:89] found id: ""
	I0805 13:02:19.935826  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.935836  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:19.935843  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:19.935896  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:19.977984  451238 cri.go:89] found id: ""
	I0805 13:02:19.978019  451238 logs.go:276] 0 containers: []
	W0805 13:02:19.978031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:19.978044  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:19.978062  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:20.030096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:20.030131  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:20.043878  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:20.043940  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:20.119251  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:20.119279  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:20.119297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:20.202445  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:20.202488  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:18.026282  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:20.524225  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:19.408760  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.410108  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:21.445045  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.944150  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:22.744728  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:22.758606  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:22.758675  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:22.791663  451238 cri.go:89] found id: ""
	I0805 13:02:22.791696  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.791708  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:22.791717  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:22.791821  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:22.826568  451238 cri.go:89] found id: ""
	I0805 13:02:22.826594  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.826603  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:22.826609  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:22.826671  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:22.860430  451238 cri.go:89] found id: ""
	I0805 13:02:22.860459  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.860470  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:22.860479  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:22.860543  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:22.893815  451238 cri.go:89] found id: ""
	I0805 13:02:22.893846  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.893854  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:22.893860  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:22.893929  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:22.929804  451238 cri.go:89] found id: ""
	I0805 13:02:22.929830  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.929840  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:22.929849  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:22.929915  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:22.964918  451238 cri.go:89] found id: ""
	I0805 13:02:22.964950  451238 logs.go:276] 0 containers: []
	W0805 13:02:22.964961  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:22.964969  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:22.965035  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:23.000236  451238 cri.go:89] found id: ""
	I0805 13:02:23.000271  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.000282  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:23.000290  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:23.000354  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:23.052075  451238 cri.go:89] found id: ""
	I0805 13:02:23.052108  451238 logs.go:276] 0 containers: []
	W0805 13:02:23.052117  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:23.052128  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:23.052141  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:23.104213  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:23.104248  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:23.118811  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:23.118851  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:23.188552  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:23.188578  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:23.188595  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:23.272518  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:23.272562  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:25.811116  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:25.825030  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:25.825113  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:25.864282  451238 cri.go:89] found id: ""
	I0805 13:02:25.864318  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.864331  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:25.864339  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:25.864413  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:25.901712  451238 cri.go:89] found id: ""
	I0805 13:02:25.901746  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.901754  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:25.901760  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:25.901822  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:25.937036  451238 cri.go:89] found id: ""
	I0805 13:02:25.937068  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.937077  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:25.937083  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:25.937146  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:25.974598  451238 cri.go:89] found id: ""
	I0805 13:02:25.974627  451238 logs.go:276] 0 containers: []
	W0805 13:02:25.974638  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:25.974646  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:25.974713  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:26.011083  451238 cri.go:89] found id: ""
	I0805 13:02:26.011116  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.011124  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:26.011130  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:26.011190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:26.050187  451238 cri.go:89] found id: ""
	I0805 13:02:26.050219  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.050231  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:26.050242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:26.050317  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:26.085038  451238 cri.go:89] found id: ""
	I0805 13:02:26.085067  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.085077  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:26.085086  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:26.085151  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:26.122121  451238 cri.go:89] found id: ""
	I0805 13:02:26.122150  451238 logs.go:276] 0 containers: []
	W0805 13:02:26.122158  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:26.122173  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:26.122191  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:26.193819  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:26.193850  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:26.193865  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:26.273453  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:26.273492  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:26.312474  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:26.312509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:26.363176  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:26.363215  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
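The cycle above is minikube's periodic log-gathering loop: with no kube-apiserver container found, it falls back to collecting kubelet, dmesg, CRI-O, and container-status output before retrying. A minimal sketch of running the same checks by hand over `minikube ssh` is shown below; the profile placeholder `<profile>` is an assumption (substitute the profile under test), while the individual commands are the ones visible in the log.

    # Sketch: reproduce the diagnostic loop manually. <profile> is a placeholder,
    # not a name taken from this report.
    PROFILE=<profile>
    # Is any kube-apiserver container present at all?
    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    # Recent kubelet and CRI-O logs, as gathered above.
    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
    # The describe-nodes step that keeps failing with "connection refused" on localhost:8443.
    minikube -p "$PROFILE" ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig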
	I0805 13:02:22.524303  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:24.525047  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.528347  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:23.909120  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:26.409913  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:25.944824  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.444803  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.878523  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:28.892242  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:28.892330  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:28.928650  451238 cri.go:89] found id: ""
	I0805 13:02:28.928682  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.928693  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:28.928702  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:28.928772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:28.965582  451238 cri.go:89] found id: ""
	I0805 13:02:28.965615  451238 logs.go:276] 0 containers: []
	W0805 13:02:28.965626  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:28.965634  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:28.965698  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:29.001824  451238 cri.go:89] found id: ""
	I0805 13:02:29.001855  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.001865  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:29.001874  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:29.001939  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:29.037688  451238 cri.go:89] found id: ""
	I0805 13:02:29.037715  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.037722  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:29.037730  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:29.037780  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:29.078495  451238 cri.go:89] found id: ""
	I0805 13:02:29.078540  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.078552  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:29.078559  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:29.078627  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:29.113728  451238 cri.go:89] found id: ""
	I0805 13:02:29.113764  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.113776  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:29.113786  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:29.113851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:29.147590  451238 cri.go:89] found id: ""
	I0805 13:02:29.147618  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.147629  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:29.147638  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:29.147702  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:29.186015  451238 cri.go:89] found id: ""
	I0805 13:02:29.186043  451238 logs.go:276] 0 containers: []
	W0805 13:02:29.186052  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:29.186062  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:29.186074  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:29.242795  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:29.242850  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:29.257012  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:29.257046  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:29.330528  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:29.330555  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:29.330569  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:29.418109  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:29.418145  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:29.025256  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.526187  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:28.909283  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.409736  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:30.944380  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:32.945421  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:31.986351  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:32.001265  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:32.001349  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:32.035152  451238 cri.go:89] found id: ""
	I0805 13:02:32.035191  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.035200  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:32.035208  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:32.035262  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:32.069086  451238 cri.go:89] found id: ""
	I0805 13:02:32.069118  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.069128  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:32.069136  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:32.069204  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:32.103788  451238 cri.go:89] found id: ""
	I0805 13:02:32.103814  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.103822  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:32.103831  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:32.103893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:32.139104  451238 cri.go:89] found id: ""
	I0805 13:02:32.139138  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.139149  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:32.139157  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:32.139222  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:32.192759  451238 cri.go:89] found id: ""
	I0805 13:02:32.192789  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.192798  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:32.192804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:32.192865  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:32.231080  451238 cri.go:89] found id: ""
	I0805 13:02:32.231115  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.231126  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:32.231135  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:32.231200  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:32.266547  451238 cri.go:89] found id: ""
	I0805 13:02:32.266578  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.266587  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:32.266594  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:32.266647  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:32.301828  451238 cri.go:89] found id: ""
	I0805 13:02:32.301856  451238 logs.go:276] 0 containers: []
	W0805 13:02:32.301865  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:32.301875  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:32.301888  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:32.358439  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:32.358479  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:32.372349  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:32.372383  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:32.442335  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:32.442369  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:32.442388  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:32.521705  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:32.521744  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:35.060867  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:35.074370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:35.074433  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:35.111149  451238 cri.go:89] found id: ""
	I0805 13:02:35.111181  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.111191  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:35.111200  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:35.111268  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:35.153781  451238 cri.go:89] found id: ""
	I0805 13:02:35.153814  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.153825  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:35.153832  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:35.153894  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:35.193207  451238 cri.go:89] found id: ""
	I0805 13:02:35.193239  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.193256  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:35.193291  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:35.193370  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:35.243879  451238 cri.go:89] found id: ""
	I0805 13:02:35.243915  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.243928  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:35.243936  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:35.243994  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:35.297922  451238 cri.go:89] found id: ""
	I0805 13:02:35.297954  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.297966  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:35.297973  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:35.298039  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:35.333201  451238 cri.go:89] found id: ""
	I0805 13:02:35.333234  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.333245  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:35.333254  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:35.333316  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:35.366327  451238 cri.go:89] found id: ""
	I0805 13:02:35.366361  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.366373  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:35.366381  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:35.366449  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:35.401515  451238 cri.go:89] found id: ""
	I0805 13:02:35.401546  451238 logs.go:276] 0 containers: []
	W0805 13:02:35.401555  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:35.401565  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:35.401578  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:35.451057  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:35.451090  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:35.465054  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:35.465095  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:35.547111  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:35.547142  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:35.547160  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:35.627451  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:35.627490  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:34.025104  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:36.524904  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:33.908489  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.909183  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.909360  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:35.445317  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:37.446056  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:38.169022  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:38.181892  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:38.181968  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:38.217919  451238 cri.go:89] found id: ""
	I0805 13:02:38.217951  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.217961  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:38.217970  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:38.218041  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:38.253967  451238 cri.go:89] found id: ""
	I0805 13:02:38.253999  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.254008  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:38.254020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:38.254073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:38.293757  451238 cri.go:89] found id: ""
	I0805 13:02:38.293789  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.293801  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:38.293809  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:38.293904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:38.329657  451238 cri.go:89] found id: ""
	I0805 13:02:38.329686  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.329697  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:38.329705  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:38.329772  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:38.364602  451238 cri.go:89] found id: ""
	I0805 13:02:38.364635  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.364647  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:38.364656  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:38.364732  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:38.396352  451238 cri.go:89] found id: ""
	I0805 13:02:38.396382  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.396394  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:38.396403  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:38.396471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:38.429172  451238 cri.go:89] found id: ""
	I0805 13:02:38.429203  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.429214  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:38.429223  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:38.429293  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:38.464855  451238 cri.go:89] found id: ""
	I0805 13:02:38.464891  451238 logs.go:276] 0 containers: []
	W0805 13:02:38.464903  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:38.464916  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:38.464931  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:38.514924  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:38.514967  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:38.530076  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:38.530113  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:38.602472  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:38.602494  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:38.602509  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:38.683905  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:38.683948  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:41.226878  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:41.245027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:41.245100  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:41.280482  451238 cri.go:89] found id: ""
	I0805 13:02:41.280511  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.280523  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:41.280532  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:41.280597  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:41.316592  451238 cri.go:89] found id: ""
	I0805 13:02:41.316622  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.316633  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:41.316641  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:41.316708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:41.353282  451238 cri.go:89] found id: ""
	I0805 13:02:41.353313  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.353324  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:41.353333  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:41.353397  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:41.393379  451238 cri.go:89] found id: ""
	I0805 13:02:41.393406  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.393417  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:41.393426  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:41.393502  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:41.430980  451238 cri.go:89] found id: ""
	I0805 13:02:41.431012  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.431023  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:41.431031  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:41.431106  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:41.467228  451238 cri.go:89] found id: ""
	I0805 13:02:41.467261  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.467273  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:41.467281  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:41.467348  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:41.502105  451238 cri.go:89] found id: ""
	I0805 13:02:41.502153  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.502166  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:41.502175  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:41.502250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:41.539286  451238 cri.go:89] found id: ""
	I0805 13:02:41.539314  451238 logs.go:276] 0 containers: []
	W0805 13:02:41.539325  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:41.539338  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:41.539353  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:41.592135  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:41.592175  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:41.608151  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:41.608184  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:41.680096  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:41.680131  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:41.680148  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:41.759589  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:41.759628  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:39.025448  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:41.526590  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:40.409447  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.909412  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:39.945459  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:42.444630  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.300461  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:44.314310  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:44.314388  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:44.348516  451238 cri.go:89] found id: ""
	I0805 13:02:44.348549  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.348562  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:44.348570  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:44.348635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:44.388256  451238 cri.go:89] found id: ""
	I0805 13:02:44.388289  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.388299  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:44.388309  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:44.388383  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:44.426743  451238 cri.go:89] found id: ""
	I0805 13:02:44.426778  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.426786  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:44.426792  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:44.426848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:44.463008  451238 cri.go:89] found id: ""
	I0805 13:02:44.463044  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.463054  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:44.463062  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:44.463129  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:44.497662  451238 cri.go:89] found id: ""
	I0805 13:02:44.497696  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.497707  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:44.497715  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:44.497789  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:44.534253  451238 cri.go:89] found id: ""
	I0805 13:02:44.534281  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.534288  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:44.534294  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:44.534378  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:44.574350  451238 cri.go:89] found id: ""
	I0805 13:02:44.574380  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.574390  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:44.574398  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:44.574468  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:44.609984  451238 cri.go:89] found id: ""
	I0805 13:02:44.610018  451238 logs.go:276] 0 containers: []
	W0805 13:02:44.610031  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:44.610044  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:44.610060  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:44.650363  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:44.650402  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:44.700997  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:44.701032  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:44.716841  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:44.716874  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:44.785482  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:44.785502  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:44.785517  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:44.023932  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.025733  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.909613  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.409724  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:44.445234  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:46.944157  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:48.946098  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:47.365382  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:47.378779  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:47.378851  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:47.413615  451238 cri.go:89] found id: ""
	I0805 13:02:47.413636  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.413645  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:47.413651  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:47.413699  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:47.448536  451238 cri.go:89] found id: ""
	I0805 13:02:47.448563  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.448572  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:47.448578  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:47.448629  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:47.490817  451238 cri.go:89] found id: ""
	I0805 13:02:47.490847  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.490856  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:47.490862  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:47.490931  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:47.533151  451238 cri.go:89] found id: ""
	I0805 13:02:47.533179  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.533187  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:47.533193  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:47.533250  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:47.571991  451238 cri.go:89] found id: ""
	I0805 13:02:47.572022  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.572030  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:47.572036  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:47.572096  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:47.606943  451238 cri.go:89] found id: ""
	I0805 13:02:47.606976  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.606987  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:47.606995  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:47.607073  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:47.644704  451238 cri.go:89] found id: ""
	I0805 13:02:47.644741  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.644753  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:47.644762  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:47.644828  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:47.687361  451238 cri.go:89] found id: ""
	I0805 13:02:47.687395  451238 logs.go:276] 0 containers: []
	W0805 13:02:47.687408  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:47.687427  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:47.687453  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:47.766572  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:47.766614  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:47.812209  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:47.812242  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:47.862948  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:47.862987  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:47.878697  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:47.878729  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:47.951680  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.452861  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:50.466370  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:50.466440  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:50.500001  451238 cri.go:89] found id: ""
	I0805 13:02:50.500031  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.500043  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:50.500051  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:50.500126  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:50.541752  451238 cri.go:89] found id: ""
	I0805 13:02:50.541786  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.541794  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:50.541800  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:50.541864  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:50.578889  451238 cri.go:89] found id: ""
	I0805 13:02:50.578915  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.578923  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:50.578930  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:50.578984  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:50.614865  451238 cri.go:89] found id: ""
	I0805 13:02:50.614896  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.614906  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:50.614912  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:50.614980  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:50.656169  451238 cri.go:89] found id: ""
	I0805 13:02:50.656195  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.656202  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:50.656209  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:50.656277  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:50.695050  451238 cri.go:89] found id: ""
	I0805 13:02:50.695082  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.695099  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:50.695108  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:50.695187  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:50.733205  451238 cri.go:89] found id: ""
	I0805 13:02:50.733233  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.733242  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:50.733249  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:50.733300  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:50.770654  451238 cri.go:89] found id: ""
	I0805 13:02:50.770683  451238 logs.go:276] 0 containers: []
	W0805 13:02:50.770693  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:50.770706  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:50.770721  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:50.826521  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:50.826567  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:50.842153  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:50.842181  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:50.916445  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:50.916474  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:50.916487  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:50.999973  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:51.000020  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:48.525240  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.024459  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:49.907505  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:51.909037  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:50.946199  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.444128  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
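The other three clusters in this run are all blocked on the same condition: a metrics-server pod that never reports Ready. A minimal way to inspect that condition directly is sketched below; the context placeholder `<profile-context>` and the `k8s-app=metrics-server` label selector are assumptions (the label is the one the metrics-server addon normally uses), only the pod names come from the log.

    # Sketch: inspect the metrics-server pod that never becomes Ready.
    # <profile-context> is a placeholder for the affected profile's kube context.
    kubectl --context <profile-context> -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context <profile-context> -n kube-system describe pod -l k8s-app=metrics-server
    kubectl --context <profile-context> -n kube-system logs -l k8s-app=metrics-server --tail=50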
	I0805 13:02:53.539541  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:53.553804  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:53.553893  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:53.593075  451238 cri.go:89] found id: ""
	I0805 13:02:53.593105  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.593114  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:53.593121  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:53.593190  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:53.629967  451238 cri.go:89] found id: ""
	I0805 13:02:53.630001  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.630012  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:53.630020  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:53.630088  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:53.663535  451238 cri.go:89] found id: ""
	I0805 13:02:53.663564  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.663572  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:53.663577  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:53.663635  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:53.697650  451238 cri.go:89] found id: ""
	I0805 13:02:53.697676  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.697684  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:53.697690  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:53.697741  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:53.732845  451238 cri.go:89] found id: ""
	I0805 13:02:53.732873  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.732883  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:53.732891  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:53.732950  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:53.774673  451238 cri.go:89] found id: ""
	I0805 13:02:53.774703  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.774712  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:53.774719  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:53.774783  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:53.815368  451238 cri.go:89] found id: ""
	I0805 13:02:53.815401  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.815413  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:53.815423  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:53.815487  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:53.849726  451238 cri.go:89] found id: ""
	I0805 13:02:53.849760  451238 logs.go:276] 0 containers: []
	W0805 13:02:53.849771  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:53.849785  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:53.849801  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:53.925356  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:53.925398  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:53.966721  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:53.966751  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:54.023096  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:54.023140  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:54.037634  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:54.037666  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:54.115159  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
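Every describe-nodes attempt in this loop fails the same way: nothing is accepting connections on localhost:8443 inside the node, i.e. the control plane never came up. A hedged sketch for confirming that from the node itself is below; again `<profile>` is a placeholder, and the commands are generic ss/curl checks rather than anything taken from this report.

    # Sketch: confirm nothing is serving on the apiserver port inside the node.
    # <profile> is a placeholder profile name.
    minikube -p <profile> ssh -- "sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'"
    minikube -p <profile> ssh -- "curl -sk https://localhost:8443/healthz || true"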
	I0805 13:02:56.616326  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:56.629665  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:56.629744  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:56.665665  451238 cri.go:89] found id: ""
	I0805 13:02:56.665701  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.665713  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:56.665722  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:56.665790  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:56.700446  451238 cri.go:89] found id: ""
	I0805 13:02:56.700473  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.700481  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:56.700488  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:56.700554  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:56.737152  451238 cri.go:89] found id: ""
	I0805 13:02:56.737190  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.737202  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:56.737210  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:56.737283  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:56.777909  451238 cri.go:89] found id: ""
	I0805 13:02:56.777942  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.777954  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:56.777961  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:56.778027  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:56.813503  451238 cri.go:89] found id: ""
	I0805 13:02:56.813537  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.813547  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:56.813556  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:56.813625  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:56.848964  451238 cri.go:89] found id: ""
	I0805 13:02:56.848993  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.849002  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:56.849008  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:56.849071  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:56.884310  451238 cri.go:89] found id: ""
	I0805 13:02:56.884339  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.884347  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:56.884356  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:56.884417  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:02:56.925895  451238 cri.go:89] found id: ""
	I0805 13:02:56.925926  451238 logs.go:276] 0 containers: []
	W0805 13:02:56.925936  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:02:56.925948  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:02:56.925962  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:02:53.025086  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.025424  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.026117  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:53.909851  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.411536  450576 pod_ready.go:102] pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:55.945123  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:57.945278  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:56.982847  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:02:56.982882  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:02:56.997703  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:02:56.997742  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:02:57.071130  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:02:57.071153  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:02:57.071174  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:02:57.152985  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:02:57.153029  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.697501  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:02:59.711799  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:02:59.711879  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:02:59.746992  451238 cri.go:89] found id: ""
	I0805 13:02:59.747024  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.747035  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:02:59.747043  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:02:59.747115  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:02:59.780563  451238 cri.go:89] found id: ""
	I0805 13:02:59.780592  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.780604  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:02:59.780611  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:02:59.780676  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:02:59.816973  451238 cri.go:89] found id: ""
	I0805 13:02:59.817007  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.817019  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:02:59.817027  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:02:59.817098  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:02:59.851989  451238 cri.go:89] found id: ""
	I0805 13:02:59.852018  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.852028  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:02:59.852035  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:02:59.852086  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:02:59.887491  451238 cri.go:89] found id: ""
	I0805 13:02:59.887517  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.887525  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:02:59.887535  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:02:59.887587  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:02:59.924965  451238 cri.go:89] found id: ""
	I0805 13:02:59.924997  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.925005  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:02:59.925012  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:02:59.925062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:02:59.965830  451238 cri.go:89] found id: ""
	I0805 13:02:59.965860  451238 logs.go:276] 0 containers: []
	W0805 13:02:59.965868  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:02:59.965875  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:02:59.965932  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:00.003208  451238 cri.go:89] found id: ""
	I0805 13:03:00.003241  451238 logs.go:276] 0 containers: []
	W0805 13:03:00.003250  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:00.003260  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:00.003275  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:00.056865  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:00.056911  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:00.070563  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:00.070593  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:00.137931  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:00.137957  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:00.137976  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:00.221598  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:00.221649  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:02:59.525042  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.024461  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:02:58.903499  450576 pod_ready.go:81] duration metric: took 4m0.001018928s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" ...
	E0805 13:02:58.903533  450576 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-p7b2r" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:02:58.903556  450576 pod_ready.go:38] duration metric: took 4m8.049032492s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:02:58.903598  450576 kubeadm.go:597] duration metric: took 4m18.518107211s to restartPrimaryControlPlane
	W0805 13:02:58.903786  450576 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:02:58.903819  450576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:02:59.945464  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.443954  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:02.761328  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:02.775836  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:02.775904  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:02.812714  451238 cri.go:89] found id: ""
	I0805 13:03:02.812752  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.812764  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:02.812773  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:02.812848  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:02.850072  451238 cri.go:89] found id: ""
	I0805 13:03:02.850103  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.850130  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:02.850138  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:02.850197  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:02.886956  451238 cri.go:89] found id: ""
	I0805 13:03:02.887081  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.887103  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:02.887114  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:02.887188  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:02.924874  451238 cri.go:89] found id: ""
	I0805 13:03:02.924906  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.924918  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:02.924925  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:02.924996  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:02.965965  451238 cri.go:89] found id: ""
	I0805 13:03:02.965996  451238 logs.go:276] 0 containers: []
	W0805 13:03:02.966007  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:02.966015  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:02.966101  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:03.001081  451238 cri.go:89] found id: ""
	I0805 13:03:03.001118  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.001130  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:03.001140  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:03.001201  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:03.036194  451238 cri.go:89] found id: ""
	I0805 13:03:03.036223  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.036234  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:03.036243  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:03.036303  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:03.071905  451238 cri.go:89] found id: ""
	I0805 13:03:03.071940  451238 logs.go:276] 0 containers: []
	W0805 13:03:03.071951  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:03.071964  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:03.071982  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:03.124400  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:03.124442  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:03.138492  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:03.138520  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:03.207300  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:03.207326  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:03.207342  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:03.294941  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:03.294983  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:05.836187  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:05.850504  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:05.850609  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:05.889692  451238 cri.go:89] found id: ""
	I0805 13:03:05.889718  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.889729  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:05.889737  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:05.889804  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:05.924597  451238 cri.go:89] found id: ""
	I0805 13:03:05.924630  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.924640  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:05.924647  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:05.924711  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:05.960373  451238 cri.go:89] found id: ""
	I0805 13:03:05.960404  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.960413  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:05.960419  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:05.960471  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:05.996583  451238 cri.go:89] found id: ""
	I0805 13:03:05.996617  451238 logs.go:276] 0 containers: []
	W0805 13:03:05.996628  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:05.996636  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:05.996708  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:06.033539  451238 cri.go:89] found id: ""
	I0805 13:03:06.033567  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.033575  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:06.033586  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:06.033655  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:06.069348  451238 cri.go:89] found id: ""
	I0805 13:03:06.069378  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.069391  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:06.069401  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:06.069466  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:06.103570  451238 cri.go:89] found id: ""
	I0805 13:03:06.103599  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.103607  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:06.103613  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:06.103665  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:06.140230  451238 cri.go:89] found id: ""
	I0805 13:03:06.140260  451238 logs.go:276] 0 containers: []
	W0805 13:03:06.140271  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:06.140284  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:06.140300  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:06.191073  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:06.191123  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:06.204825  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:06.204857  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:06.281309  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:06.281339  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:06.281358  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:06.361709  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:06.361749  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:04.025007  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.524506  450884 pod_ready.go:102] pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:04.444267  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:06.444910  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:08.445441  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
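	The pod_ready lines above are the harness polling the metrics-server pods' Ready condition every couple of seconds. The same condition can be inspected by hand with kubectl (illustrative only; the pod name is taken from the log, and the jsonpath expression is standard kubectl syntax):

	kubectl -n kube-system get pod metrics-server-569cc877fc-k8mrt \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'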
	I0805 13:03:08.903194  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:08.921602  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:08.921681  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:08.960916  451238 cri.go:89] found id: ""
	I0805 13:03:08.960945  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.960975  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:08.960986  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:08.961055  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:08.996316  451238 cri.go:89] found id: ""
	I0805 13:03:08.996417  451238 logs.go:276] 0 containers: []
	W0805 13:03:08.996436  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:08.996448  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:08.996522  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:09.038536  451238 cri.go:89] found id: ""
	I0805 13:03:09.038572  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.038584  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:09.038593  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:09.038664  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:09.075368  451238 cri.go:89] found id: ""
	I0805 13:03:09.075396  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.075405  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:09.075412  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:09.075474  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:09.114232  451238 cri.go:89] found id: ""
	I0805 13:03:09.114262  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.114272  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:09.114280  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:09.114353  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:09.161878  451238 cri.go:89] found id: ""
	I0805 13:03:09.161964  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.161978  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:09.161988  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:09.162062  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:09.206694  451238 cri.go:89] found id: ""
	I0805 13:03:09.206727  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.206739  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:09.206748  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:09.206890  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:09.257029  451238 cri.go:89] found id: ""
	I0805 13:03:09.257066  451238 logs.go:276] 0 containers: []
	W0805 13:03:09.257079  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:09.257090  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:09.257107  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:09.278638  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:09.278679  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:09.353760  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:09.353781  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:09.353793  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:09.438371  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:09.438419  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:09.487253  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:09.487297  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:08.018954  450884 pod_ready.go:81] duration metric: took 4m0.00055059s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:08.018987  450884 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dsrqr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0805 13:03:08.019010  450884 pod_ready.go:38] duration metric: took 4m11.028507743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:08.019048  450884 kubeadm.go:597] duration metric: took 4m19.097834327s to restartPrimaryControlPlane
	W0805 13:03:08.019122  450884 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:08.019157  450884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:10.945002  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.945953  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:12.042215  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:12.055721  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:12.055812  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:12.096936  451238 cri.go:89] found id: ""
	I0805 13:03:12.096965  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.096977  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:03:12.096985  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:12.097051  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:12.136149  451238 cri.go:89] found id: ""
	I0805 13:03:12.136181  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.136192  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:03:12.136199  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:12.136276  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:12.180568  451238 cri.go:89] found id: ""
	I0805 13:03:12.180606  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.180618  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:03:12.180626  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:12.180695  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:12.221759  451238 cri.go:89] found id: ""
	I0805 13:03:12.221794  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.221806  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:03:12.221815  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:12.221882  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:12.259460  451238 cri.go:89] found id: ""
	I0805 13:03:12.259490  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.259498  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:03:12.259508  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:12.259563  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:12.301245  451238 cri.go:89] found id: ""
	I0805 13:03:12.301277  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.301289  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:03:12.301297  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:12.301368  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:12.343640  451238 cri.go:89] found id: ""
	I0805 13:03:12.343678  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.343690  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:12.343698  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:03:12.343809  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:03:12.382729  451238 cri.go:89] found id: ""
	I0805 13:03:12.382762  451238 logs.go:276] 0 containers: []
	W0805 13:03:12.382774  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:03:12.382787  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:12.382807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:12.400862  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:12.400897  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:03:12.478755  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:03:12.478788  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:12.478807  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:12.566029  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:03:12.566080  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:12.611834  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:12.611929  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:15.171517  451238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:15.185569  451238 kubeadm.go:597] duration metric: took 4m3.737627997s to restartPrimaryControlPlane
	W0805 13:03:15.185662  451238 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 13:03:15.185697  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:03:15.669994  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:15.684794  451238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:15.695088  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:15.705403  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:15.705427  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:15.705488  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:15.714777  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:15.714833  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:15.724437  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:15.733263  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:15.733317  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:15.743004  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.752219  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:15.752278  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:15.761788  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:15.771193  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:15.771245  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
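	The sequence above, repeated for admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, boils down to: keep each kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so the upcoming kubeadm init can regenerate it. A minimal shell sketch of that logic (the endpoint and file names come from the log; the loop itself is illustrative, not minikube's actual code):

	endpoint='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  path="/etc/kubernetes/$f"
	  # grep exits non-zero when the endpoint (or the file) is missing,
	  # which is exactly the case the harness treats as stale config
	  sudo grep -q "$endpoint" "$path" || sudo rm -f "$path"
	done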
	I0805 13:03:15.780964  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:15.855628  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:03:15.855751  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:16.015686  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:16.015880  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:16.016041  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:16.207054  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:16.209133  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:16.209256  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:16.209376  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:16.209493  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:16.209597  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:16.209703  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:16.211637  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:16.211726  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:16.211833  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:16.211959  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:16.212690  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:16.212863  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:16.212963  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:16.283080  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:16.609523  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:16.765635  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:16.934487  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:16.955335  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:16.956267  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:16.956328  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:17.088081  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:15.445305  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.447306  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:17.090118  451238 out.go:204]   - Booting up control plane ...
	I0805 13:03:17.090264  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:17.100902  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:17.101263  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:17.102210  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:17.112522  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
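	The wait-control-plane step for the v1.20 cluster above polls until the kubelet has started the static pods defined under /etc/kubernetes/manifests. Two quick checks that can be run on the node while this wait is in progress (illustrative; crictl is invoked the same way the harness does elsewhere in this log):

	ls /etc/kubernetes/manifests/              # expect etcd and kube-* manifests here
	sudo crictl ps -a --name kube-apiserver    # lists the apiserver container once it starts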
	I0805 13:03:19.943658  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:21.944253  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:23.945158  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:25.252381  450576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.348530672s)
	I0805 13:03:25.252504  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:25.269305  450576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:25.279322  450576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:25.289241  450576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:25.289266  450576 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:25.289304  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:03:25.298671  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:25.298732  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:25.309962  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:03:25.320180  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:25.320247  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:25.330481  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.340565  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:25.340652  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:25.351244  450576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:03:25.361443  450576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:25.361536  450576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:03:25.371655  450576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:25.419277  450576 kubeadm.go:310] W0805 13:03:25.398597    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.420220  450576 kubeadm.go:310] W0805 13:03:25.399642    2979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0805 13:03:25.537148  450576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:25.945501  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:27.945972  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.413703  450576 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0805 13:03:33.413775  450576 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:33.413863  450576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:33.414008  450576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:33.414152  450576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 13:03:33.414235  450576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:33.415804  450576 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:33.415874  450576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:33.415949  450576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:33.416037  450576 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:33.416101  450576 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:33.416174  450576 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:33.416237  450576 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:33.416289  450576 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:33.416357  450576 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:33.416437  450576 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:33.416518  450576 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:33.416553  450576 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:33.416603  450576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:33.416646  450576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:33.416701  450576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:33.416745  450576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:33.416816  450576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:33.416878  450576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:33.416971  450576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:33.417059  450576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:33.418572  450576 out.go:204]   - Booting up control plane ...
	I0805 13:03:33.418671  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:33.418751  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:33.418833  450576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:33.418965  450576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:33.419092  450576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:33.419172  450576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:33.419342  450576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:33.419488  450576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0805 13:03:33.419577  450576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.308417ms
	I0805 13:03:33.419672  450576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:33.419780  450576 kubeadm.go:310] [api-check] The API server is healthy after 5.001429681s
	I0805 13:03:33.419908  450576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:33.420049  450576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:33.420117  450576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:33.420293  450576 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-669469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:33.420385  450576 kubeadm.go:310] [bootstrap-token] Using token: i9zl3x.c4hzh1c9ccxlydzt
	I0805 13:03:33.421925  450576 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:33.422042  450576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:33.422157  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:33.422352  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:33.422488  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:33.422649  450576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:33.422784  450576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:33.422914  450576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:33.422991  450576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:33.423060  450576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:33.423070  450576 kubeadm.go:310] 
	I0805 13:03:33.423160  450576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:33.423173  450576 kubeadm.go:310] 
	I0805 13:03:33.423274  450576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:33.423283  450576 kubeadm.go:310] 
	I0805 13:03:33.423316  450576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:33.423409  450576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:33.423495  450576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:33.423513  450576 kubeadm.go:310] 
	I0805 13:03:33.423616  450576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:33.423628  450576 kubeadm.go:310] 
	I0805 13:03:33.423692  450576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:33.423701  450576 kubeadm.go:310] 
	I0805 13:03:33.423793  450576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:33.423931  450576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:33.424030  450576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:33.424039  450576 kubeadm.go:310] 
	I0805 13:03:33.424106  450576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:33.424176  450576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:33.424185  450576 kubeadm.go:310] 
	I0805 13:03:33.424282  450576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424430  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:33.424473  450576 kubeadm.go:310] 	--control-plane 
	I0805 13:03:33.424482  450576 kubeadm.go:310] 
	I0805 13:03:33.424588  450576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:33.424602  450576 kubeadm.go:310] 
	I0805 13:03:33.424725  450576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9zl3x.c4hzh1c9ccxlydzt \
	I0805 13:03:33.424870  450576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:33.424892  450576 cni.go:84] Creating CNI manager for ""
	I0805 13:03:33.424911  450576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:33.426503  450576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:33.427981  450576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:33.439484  450576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:33.458459  450576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:33.458547  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:33.458579  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-669469 minikube.k8s.io/updated_at=2024_08_05T13_03_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=no-preload-669469 minikube.k8s.io/primary=true
	I0805 13:03:33.488847  450576 ops.go:34] apiserver oom_adj: -16
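The two kubectl invocations above bind the kube-system default service account to cluster-admin (the minikube-rbac clusterrolebinding) and stamp the node with minikube's bookkeeping labels; once the apiserver is reachable, both results can be confirmed with ordinary kubectl, for example:

  kubectl get clusterrolebinding minikube-rbac -o wide
  kubectl get node no-preload-669469 --show-labels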
	I0805 13:03:29.946423  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:32.444923  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:33.674306  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.174940  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:34.674936  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.174693  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:35.675004  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.174801  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:36.674878  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.174394  450576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:37.263948  450576 kubeadm.go:1113] duration metric: took 3.805464287s to wait for elevateKubeSystemPrivileges
	I0805 13:03:37.263985  450576 kubeadm.go:394] duration metric: took 4m56.93214495s to StartCluster
	I0805 13:03:37.264025  450576 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.264143  450576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:03:37.265965  450576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:03:37.266283  450576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:03:37.266400  450576 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:03:37.266469  450576 addons.go:69] Setting storage-provisioner=true in profile "no-preload-669469"
	I0805 13:03:37.266510  450576 addons.go:234] Setting addon storage-provisioner=true in "no-preload-669469"
	W0805 13:03:37.266518  450576 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:03:37.266519  450576 config.go:182] Loaded profile config "no-preload-669469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0805 13:03:37.266551  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.266505  450576 addons.go:69] Setting default-storageclass=true in profile "no-preload-669469"
	I0805 13:03:37.266547  450576 addons.go:69] Setting metrics-server=true in profile "no-preload-669469"
	I0805 13:03:37.266612  450576 addons.go:234] Setting addon metrics-server=true in "no-preload-669469"
	I0805 13:03:37.266616  450576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-669469"
	W0805 13:03:37.266627  450576 addons.go:243] addon metrics-server should already be in state true
	I0805 13:03:37.266668  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267002  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267035  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267041  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.267085  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.267985  450576 out.go:177] * Verifying Kubernetes components...
	I0805 13:03:37.269486  450576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:03:37.283242  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0805 13:03:37.283291  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0805 13:03:37.283245  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0805 13:03:37.283710  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283785  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.283717  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284316  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284319  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284296  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.284335  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284360  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.284734  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284735  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284746  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.284963  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.285343  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285375  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.285387  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.285441  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.288699  450576 addons.go:234] Setting addon default-storageclass=true in "no-preload-669469"
	W0805 13:03:37.288722  450576 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:03:37.288753  450576 host.go:66] Checking if "no-preload-669469" exists ...
	I0805 13:03:37.289023  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.289049  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.303814  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0805 13:03:37.304491  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.305081  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.305104  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.305552  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0805 13:03:37.305566  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0805 13:03:37.305583  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.305928  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306007  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.306148  450576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:03:37.306190  450576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:03:37.306485  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306503  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306595  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.306611  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.306971  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.306998  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.307157  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.307162  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.309002  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.309241  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.311054  450576 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:03:37.311055  450576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:03:37.312682  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:03:37.312695  450576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:03:37.312710  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.312834  450576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.312856  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:03:37.312874  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.317044  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317635  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.317660  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317753  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.317955  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318141  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.318360  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.318400  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.318427  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.318539  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.318633  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.318967  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.319111  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.319241  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.325066  450576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0805 13:03:37.325633  450576 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:03:37.326052  450576 main.go:141] libmachine: Using API Version  1
	I0805 13:03:37.326071  450576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:03:37.326326  450576 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:03:37.326473  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetState
	I0805 13:03:37.328502  450576 main.go:141] libmachine: (no-preload-669469) Calling .DriverName
	I0805 13:03:37.328814  450576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.328826  450576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:03:37.328839  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHHostname
	I0805 13:03:37.331482  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.331853  450576 main.go:141] libmachine: (no-preload-669469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:0a", ip: ""} in network mk-no-preload-669469: {Iface:virbr4 ExpiryTime:2024-08-05 13:58:12 +0000 UTC Type:0 Mac:52:54:00:55:38:0a Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:no-preload-669469 Clientid:01:52:54:00:55:38:0a}
	I0805 13:03:37.331874  450576 main.go:141] libmachine: (no-preload-669469) DBG | domain no-preload-669469 has defined IP address 192.168.72.223 and MAC address 52:54:00:55:38:0a in network mk-no-preload-669469
	I0805 13:03:37.332013  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHPort
	I0805 13:03:37.332169  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHKeyPath
	I0805 13:03:37.332270  450576 main.go:141] libmachine: (no-preload-669469) Calling .GetSSHUsername
	I0805 13:03:37.332358  450576 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/no-preload-669469/id_rsa Username:docker}
	I0805 13:03:37.483477  450576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:03:37.501924  450576 node_ready.go:35] waiting up to 6m0s for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511394  450576 node_ready.go:49] node "no-preload-669469" has status "Ready":"True"
	I0805 13:03:37.511427  450576 node_ready.go:38] duration metric: took 9.462968ms for node "no-preload-669469" to be "Ready" ...
	I0805 13:03:37.511443  450576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:37.526505  450576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
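The node_ready/pod_ready helpers above poll the API for the node's Ready condition and then for each system-critical pod. The equivalent checks from a workstation using this profile's kubeconfig would be, roughly:

  kubectl wait --for=condition=Ready node/no-preload-669469 --timeout=6m
  kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m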
	I0805 13:03:37.575598  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:03:37.583338  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:03:37.583362  450576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:03:37.594019  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:03:37.629885  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:03:37.629913  450576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:03:37.684790  450576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.684825  450576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:03:37.753629  450576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:03:37.857352  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857386  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.857777  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.857780  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.857812  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.857829  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.857838  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.858101  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.858117  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:37.858153  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.871616  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:37.871639  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:37.871970  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:37.872022  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:37.872031  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290429  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290449  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290784  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.290856  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.290871  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.290880  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.290829  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.291265  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.291289  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.291271  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.880274  450576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.126602375s)
	I0805 13:03:38.880331  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880344  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880868  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.880896  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.880906  450576 main.go:141] libmachine: Making call to close driver server
	I0805 13:03:38.880916  450576 main.go:141] libmachine: (no-preload-669469) Calling .Close
	I0805 13:03:38.880871  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881196  450576 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:03:38.881204  450576 main.go:141] libmachine: (no-preload-669469) DBG | Closing plugin on server side
	I0805 13:03:38.881211  450576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:03:38.881230  450576 addons.go:475] Verifying addon metrics-server=true in "no-preload-669469"
	I0805 13:03:38.882896  450576 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
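At this point metrics-server has only been applied, not yet Ready (it is still Pending in the system_pods listing further down). Outside the harness its rollout can be checked with, for example (deployment name inferred from the pod name metrics-server-6867b74b74-x4j7b):

  kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
  kubectl top nodes   # only succeeds once metrics-server is serving metrics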
	I0805 13:03:34.945631  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:37.446855  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:39.741362  450884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.722174979s)
	I0805 13:03:39.741438  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:39.760465  450884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 13:03:39.770587  450884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:03:39.780157  450884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:03:39.780177  450884 kubeadm.go:157] found existing configuration files:
	
	I0805 13:03:39.780215  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0805 13:03:39.790172  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:03:39.790243  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:03:39.803838  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0805 13:03:39.816314  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:03:39.816367  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:03:39.826636  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.836513  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:03:39.836570  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:03:39.846356  450884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0805 13:03:39.855694  450884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:03:39.855770  450884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
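The block above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected endpoint (https://control-plane.minikube.internal:8444 for this profile) and removed when it does not match or, as here, does not exist, before kubeadm init is re-run. A condensed sketch of the same cleanup:

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done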
	I0805 13:03:39.865721  450884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:03:40.081251  450884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:03:38.884521  450576 addons.go:510] duration metric: took 1.618121451s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0805 13:03:39.536758  450576 pod_ready.go:102] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:41.035239  450576 pod_ready.go:92] pod "etcd-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.035266  450576 pod_ready.go:81] duration metric: took 3.508734543s for pod "etcd-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.035280  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042787  450576 pod_ready.go:92] pod "kube-apiserver-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:41.042811  450576 pod_ready.go:81] duration metric: took 7.522909ms for pod "kube-apiserver-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:41.042824  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048338  450576 pod_ready.go:92] pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:42.048363  450576 pod_ready.go:81] duration metric: took 1.005531569s for pod "kube-controller-manager-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:42.048373  450576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:39.945815  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:42.445704  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:44.056394  450576 pod_ready.go:102] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:45.555280  450576 pod_ready.go:92] pod "kube-scheduler-no-preload-669469" in "kube-system" namespace has status "Ready":"True"
	I0805 13:03:45.555310  450576 pod_ready.go:81] duration metric: took 3.506927542s for pod "kube-scheduler-no-preload-669469" in "kube-system" namespace to be "Ready" ...
	I0805 13:03:45.555321  450576 pod_ready.go:38] duration metric: took 8.043865797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:45.555338  450576 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:45.555397  450576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:45.572225  450576 api_server.go:72] duration metric: took 8.30589728s to wait for apiserver process to appear ...
	I0805 13:03:45.572249  450576 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:45.572272  450576 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I0805 13:03:45.578042  450576 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I0805 13:03:45.579014  450576 api_server.go:141] control plane version: v1.31.0-rc.0
	I0805 13:03:45.579034  450576 api_server.go:131] duration metric: took 6.778214ms to wait for apiserver health ...
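The healthz probe above is a plain HTTPS GET against the apiserver; it can be reproduced from the node or the host with curl (-k skips verification of the cluster's self-signed CA; depending on anonymous-auth settings the admin client certificate may also be required):

  curl -k https://192.168.72.223:8443/healthz
  # expected body: ok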
	I0805 13:03:45.579042  450576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:45.585537  450576 system_pods.go:59] 9 kube-system pods found
	I0805 13:03:45.585660  450576 system_pods.go:61] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.585673  450576 system_pods.go:61] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.585679  450576 system_pods.go:61] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.585687  450576 system_pods.go:61] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.585694  450576 system_pods.go:61] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.585700  450576 system_pods.go:61] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.585705  450576 system_pods.go:61] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.585716  450576 system_pods.go:61] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.585726  450576 system_pods.go:61] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.585736  450576 system_pods.go:74] duration metric: took 6.688464ms to wait for pod list to return data ...
	I0805 13:03:45.585749  450576 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:03:45.589498  450576 default_sa.go:45] found service account: "default"
	I0805 13:03:45.589526  450576 default_sa.go:55] duration metric: took 3.765664ms for default service account to be created ...
	I0805 13:03:45.589535  450576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:03:45.597499  450576 system_pods.go:86] 9 kube-system pods found
	I0805 13:03:45.597527  450576 system_pods.go:89] "coredns-6f6b679f8f-npbmj" [9eea9e0a-697b-42c9-857c-a3556c658fde] Running
	I0805 13:03:45.597533  450576 system_pods.go:89] "coredns-6f6b679f8f-pqhwx" [3d7bb193-e93e-49b8-be4b-943f2d7fe59d] Running
	I0805 13:03:45.597537  450576 system_pods.go:89] "etcd-no-preload-669469" [550acfbb-f255-470e-9e4f-a6eb36447951] Running
	I0805 13:03:45.597541  450576 system_pods.go:89] "kube-apiserver-no-preload-669469" [57089d30-f83b-4f06-8281-8bcdfb571df9] Running
	I0805 13:03:45.597547  450576 system_pods.go:89] "kube-controller-manager-no-preload-669469" [8f3b2de3-6296-4f95-8d91-b9408c8eb38b] Running
	I0805 13:03:45.597550  450576 system_pods.go:89] "kube-proxy-tpn5s" [f89e32f9-d750-41ac-891e-e3ca4a4fbbd2] Running
	I0805 13:03:45.597554  450576 system_pods.go:89] "kube-scheduler-no-preload-669469" [69af56a0-7269-4bc5-83ea-c632c7b8d060] Running
	I0805 13:03:45.597563  450576 system_pods.go:89] "metrics-server-6867b74b74-x4j7b" [55a747e4-f9a7-41f1-b584-470048ba6fcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:03:45.597568  450576 system_pods.go:89] "storage-provisioner" [cb19adf6-e208-4709-b02f-ae32acc30478] Running
	I0805 13:03:45.597577  450576 system_pods.go:126] duration metric: took 8.035546ms to wait for k8s-apps to be running ...
	I0805 13:03:45.597586  450576 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:03:45.597631  450576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:03:45.619317  450576 system_svc.go:56] duration metric: took 21.706117ms WaitForService to wait for kubelet
	I0805 13:03:45.619365  450576 kubeadm.go:582] duration metric: took 8.353035332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:03:45.619398  450576 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:03:45.622763  450576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:03:45.622790  450576 node_conditions.go:123] node cpu capacity is 2
	I0805 13:03:45.622801  450576 node_conditions.go:105] duration metric: took 3.396756ms to run NodePressure ...
	I0805 13:03:45.622814  450576 start.go:241] waiting for startup goroutines ...
	I0805 13:03:45.622821  450576 start.go:246] waiting for cluster config update ...
	I0805 13:03:45.622831  450576 start.go:255] writing updated cluster config ...
	I0805 13:03:45.623102  450576 ssh_runner.go:195] Run: rm -f paused
	I0805 13:03:45.682547  450576 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0805 13:03:45.684415  450576 out.go:177] * Done! kubectl is now configured to use "no-preload-669469" cluster and "default" namespace by default
	I0805 13:03:48.707730  450884 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 13:03:48.707817  450884 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:03:48.707920  450884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:03:48.708065  450884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:03:48.708218  450884 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:03:48.708311  450884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:03:48.709807  450884 out.go:204]   - Generating certificates and keys ...
	I0805 13:03:48.709878  450884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:03:48.709931  450884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:03:48.710008  450884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:03:48.710084  450884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:03:48.710148  450884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:03:48.710196  450884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:03:48.710251  450884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:03:48.710316  450884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:03:48.710415  450884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:03:48.710520  450884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:03:48.710582  450884 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:03:48.710656  450884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:03:48.710700  450884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:03:48.710746  450884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 13:03:48.710790  450884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:03:48.710843  450884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:03:48.710895  450884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:03:48.710971  450884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:03:48.711055  450884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:03:48.713503  450884 out.go:204]   - Booting up control plane ...
	I0805 13:03:48.713601  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:03:48.713687  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:03:48.713763  450884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:03:48.713911  450884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:03:48.714039  450884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:03:48.714105  450884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:03:48.714222  450884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 13:03:48.714284  450884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 13:03:48.714345  450884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.128103ms
	I0805 13:03:48.714423  450884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 13:03:48.714491  450884 kubeadm.go:310] [api-check] The API server is healthy after 5.502076793s
	I0805 13:03:48.714600  450884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 13:03:48.714730  450884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 13:03:48.714794  450884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 13:03:48.714987  450884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-371585 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 13:03:48.715075  450884 kubeadm.go:310] [bootstrap-token] Using token: cpuyhq.sjq5yhx27tk7meks
	I0805 13:03:48.716575  450884 out.go:204]   - Configuring RBAC rules ...
	I0805 13:03:48.716686  450884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 13:03:48.716775  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 13:03:48.716952  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 13:03:48.717075  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 13:03:48.717196  450884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 13:03:48.717270  450884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 13:03:48.717391  450884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 13:03:48.717450  450884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 13:03:48.717512  450884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 13:03:48.717521  450884 kubeadm.go:310] 
	I0805 13:03:48.717613  450884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 13:03:48.717623  450884 kubeadm.go:310] 
	I0805 13:03:48.717724  450884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 13:03:48.717734  450884 kubeadm.go:310] 
	I0805 13:03:48.717768  450884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 13:03:48.717848  450884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 13:03:48.717892  450884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 13:03:48.717898  450884 kubeadm.go:310] 
	I0805 13:03:48.717968  450884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 13:03:48.717978  450884 kubeadm.go:310] 
	I0805 13:03:48.718047  450884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 13:03:48.718057  450884 kubeadm.go:310] 
	I0805 13:03:48.718133  450884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 13:03:48.718220  450884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 13:03:48.718297  450884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 13:03:48.718304  450884 kubeadm.go:310] 
	I0805 13:03:48.718422  450884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 13:03:48.718506  450884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 13:03:48.718513  450884 kubeadm.go:310] 
	I0805 13:03:48.718585  450884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718669  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 \
	I0805 13:03:48.718688  450884 kubeadm.go:310] 	--control-plane 
	I0805 13:03:48.718694  450884 kubeadm.go:310] 
	I0805 13:03:48.718761  450884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 13:03:48.718769  450884 kubeadm.go:310] 
	I0805 13:03:48.718848  450884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cpuyhq.sjq5yhx27tk7meks \
	I0805 13:03:48.718948  450884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d5d31a77e9c4cbf19599d2fca5d8f2345e115b01301fa4b841f92bcfec86ddc6 
	I0805 13:03:48.718957  450884 cni.go:84] Creating CNI manager for ""
	I0805 13:03:48.718965  450884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 13:03:48.720262  450884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 13:03:44.946225  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:47.444313  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:48.721390  450884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 13:03:48.732324  450884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 13:03:48.750318  450884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 13:03:48.750397  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:48.750398  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-371585 minikube.k8s.io/updated_at=2024_08_05T13_03_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=default-k8s-diff-port-371585 minikube.k8s.io/primary=true
	I0805 13:03:48.781590  450884 ops.go:34] apiserver oom_adj: -16
	I0805 13:03:48.966544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.467473  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.967093  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.466813  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:50.967183  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.467350  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:51.967432  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:49.444667  450393 pod_ready.go:102] pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace has status "Ready":"False"
	I0805 13:03:49.444719  450393 pod_ready.go:81] duration metric: took 4m0.006667631s for pod "metrics-server-569cc877fc-k8mrt" in "kube-system" namespace to be "Ready" ...
	E0805 13:03:49.444731  450393 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0805 13:03:49.444738  450393 pod_ready.go:38] duration metric: took 4m2.407503205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:03:49.444757  450393 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:03:49.444787  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:49.444849  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:49.502039  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:49.502067  450393 cri.go:89] found id: ""
	I0805 13:03:49.502079  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:49.502139  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.510426  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:49.510494  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:49.553861  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:49.553889  450393 cri.go:89] found id: ""
	I0805 13:03:49.553899  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:49.553960  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.558802  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:49.558868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:49.594787  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:49.594810  450393 cri.go:89] found id: ""
	I0805 13:03:49.594828  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:49.594891  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.599735  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:49.599822  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:49.637856  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:49.637878  450393 cri.go:89] found id: ""
	I0805 13:03:49.637886  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:49.637939  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.642228  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:49.642295  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:49.683822  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:49.683844  450393 cri.go:89] found id: ""
	I0805 13:03:49.683853  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:49.683913  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.688077  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:49.688155  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:49.724887  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:49.724913  450393 cri.go:89] found id: ""
	I0805 13:03:49.724923  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:49.724987  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.728965  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:49.729052  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:49.765826  450393 cri.go:89] found id: ""
	I0805 13:03:49.765859  450393 logs.go:276] 0 containers: []
	W0805 13:03:49.765871  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:49.765878  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:49.765944  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:49.803790  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:49.803811  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.803815  450393 cri.go:89] found id: ""
	I0805 13:03:49.803823  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:49.803887  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.808064  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:49.812308  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:49.812332  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:49.851842  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:49.851867  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:50.418758  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:50.418808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:50.564965  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:50.564999  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:50.608518  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:50.608557  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:50.658446  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:50.658482  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:50.699924  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:50.699962  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:50.741228  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:50.741264  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:50.776100  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:50.776133  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:50.827847  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:50.827880  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:50.867699  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:50.867731  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:50.920049  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:50.920085  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:50.934198  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:50.934224  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
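Each "Gathering logs for ..." step above pairs a container lookup with a tail of that container's logs; the same two-step lookup can be run by hand on the node, e.g. for the apiserver:

  ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
  sudo crictl logs --tail 400 "$ID"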
	I0805 13:03:53.477808  450393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:03:53.494062  450393 api_server.go:72] duration metric: took 4m14.183013645s to wait for apiserver process to appear ...
	I0805 13:03:53.494093  450393 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:03:53.494143  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:53.494211  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:53.534293  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:53.534322  450393 cri.go:89] found id: ""
	I0805 13:03:53.534333  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:53.534400  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.539014  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:53.539088  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:53.576587  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:53.576608  450393 cri.go:89] found id: ""
	I0805 13:03:53.576616  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:53.576667  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.582068  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:53.582147  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:53.623240  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:53.623264  450393 cri.go:89] found id: ""
	I0805 13:03:53.623274  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:53.623352  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.627638  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:53.627699  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:53.668167  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:53.668198  450393 cri.go:89] found id: ""
	I0805 13:03:53.668209  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:53.668281  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.672390  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:53.672469  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:53.714046  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:53.714069  450393 cri.go:89] found id: ""
	I0805 13:03:53.714078  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:53.714130  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.718325  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:53.718392  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:53.756343  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:53.756372  450393 cri.go:89] found id: ""
	I0805 13:03:53.756382  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:53.756444  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.760627  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:53.760696  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:53.806370  450393 cri.go:89] found id: ""
	I0805 13:03:53.806406  450393 logs.go:276] 0 containers: []
	W0805 13:03:53.806424  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:53.806432  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:53.806505  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:53.843082  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:53.843116  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:53.843121  450393 cri.go:89] found id: ""
	I0805 13:03:53.843129  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:53.843188  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.847214  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:53.851093  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:53.851112  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:52.467589  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:52.967390  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.466580  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:53.967544  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.467454  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.967281  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.467111  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:55.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.467255  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:56.967513  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:54.296506  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:54.296556  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:54.343983  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:54.344026  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:54.389236  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:54.389271  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:54.427964  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:54.427996  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:54.465953  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:54.465988  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:54.521755  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:54.521835  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:54.565481  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:54.565513  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:54.606592  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:54.606634  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:54.650820  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:54.650858  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:54.704512  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:54.704559  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:54.722149  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:54.722184  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:54.844289  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:54.844324  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.386998  450393 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0805 13:03:57.391714  450393 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0805 13:03:57.392752  450393 api_server.go:141] control plane version: v1.30.3
	I0805 13:03:57.392776  450393 api_server.go:131] duration metric: took 3.898675075s to wait for apiserver health ...
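
The health gate above is a plain HTTPS GET against the apiserver's /healthz endpoint using the cluster's client certificate. A rough equivalent from the host, assuming minikube's usual certificate layout (the certificate paths are an assumption; the address is the one in the log):

    # sketch only: cert paths follow minikube's conventional layout, not copied from this run
    curl --cacert ~/.minikube/ca.crt \
         --cert   ~/.minikube/profiles/embed-certs-321139/client.crt \
         --key    ~/.minikube/profiles/embed-certs-321139/client.key \
         https://192.168.39.196:8443/healthz    # a healthy apiserver answers "ok"
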
	I0805 13:03:57.392783  450393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:03:57.392812  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:03:57.392868  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:03:57.430171  450393 cri.go:89] found id: "be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:57.430201  450393 cri.go:89] found id: ""
	I0805 13:03:57.430210  450393 logs.go:276] 1 containers: [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7]
	I0805 13:03:57.430270  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.434861  450393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:03:57.434920  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:03:57.490595  450393 cri.go:89] found id: "85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:57.490622  450393 cri.go:89] found id: ""
	I0805 13:03:57.490632  450393 logs.go:276] 1 containers: [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804]
	I0805 13:03:57.490702  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.496054  450393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:03:57.496141  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:03:57.540248  450393 cri.go:89] found id: "b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:57.540278  450393 cri.go:89] found id: ""
	I0805 13:03:57.540289  450393 logs.go:276] 1 containers: [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb]
	I0805 13:03:57.540353  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.547750  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:03:57.547820  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:03:57.595821  450393 cri.go:89] found id: "8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:03:57.595852  450393 cri.go:89] found id: ""
	I0805 13:03:57.595864  450393 logs.go:276] 1 containers: [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756]
	I0805 13:03:57.595932  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.600153  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:03:57.600225  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:03:57.640382  450393 cri.go:89] found id: "c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:57.640409  450393 cri.go:89] found id: ""
	I0805 13:03:57.640418  450393 logs.go:276] 1 containers: [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0]
	I0805 13:03:57.640486  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.645476  450393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:03:57.645569  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:03:57.700199  450393 cri.go:89] found id: "75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:57.700224  450393 cri.go:89] found id: ""
	I0805 13:03:57.700233  450393 logs.go:276] 1 containers: [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f]
	I0805 13:03:57.700294  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.704818  450393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:03:57.704874  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:03:57.745647  450393 cri.go:89] found id: ""
	I0805 13:03:57.745677  450393 logs.go:276] 0 containers: []
	W0805 13:03:57.745687  450393 logs.go:278] No container was found matching "kindnet"
	I0805 13:03:57.745696  450393 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0805 13:03:57.745760  450393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0805 13:03:57.787327  450393 cri.go:89] found id: "07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:57.787367  450393 cri.go:89] found id: "2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:57.787374  450393 cri.go:89] found id: ""
	I0805 13:03:57.787384  450393 logs.go:276] 2 containers: [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86]
	I0805 13:03:57.787448  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.792340  450393 ssh_runner.go:195] Run: which crictl
	I0805 13:03:57.796906  450393 logs.go:123] Gathering logs for kubelet ...
	I0805 13:03:57.796933  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:03:57.850401  450393 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:03:57.850447  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 13:03:57.961760  450393 logs.go:123] Gathering logs for kube-apiserver [be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7] ...
	I0805 13:03:57.961808  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be59c5f295285e1c2acdc811748b6e8afe62115f2c6c96235418bce96d7f64b7"
	I0805 13:03:58.009682  450393 logs.go:123] Gathering logs for etcd [85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804] ...
	I0805 13:03:58.009720  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85c424836db219fbf14adb5127b4fda802e9a5c641c9421b15a95e2efeb15804"
	I0805 13:03:58.061874  450393 logs.go:123] Gathering logs for kube-proxy [c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0] ...
	I0805 13:03:58.061915  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c905047116d6c70a3f4c0c79c7ee5a7918b75a3b6e4e200a3d6c844b5c990dc0"
	I0805 13:03:58.105715  450393 logs.go:123] Gathering logs for kube-controller-manager [75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f] ...
	I0805 13:03:58.105745  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75f0d0c4ce468e34e7371191ba72b28c099eda754b350086e857849d87f9410f"
	I0805 13:03:58.164739  450393 logs.go:123] Gathering logs for storage-provisioner [07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b] ...
	I0805 13:03:58.164780  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a14eee4cdaed528db21d46190d9cbcb92616ed18a01cd14ec083cf5ff4b30b"
	I0805 13:03:58.203530  450393 logs.go:123] Gathering logs for storage-provisioner [2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86] ...
	I0805 13:03:58.203579  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d096466c2e0da0dd733761f597c541aa264b3b127b10cc25866bf21cd236c86"
	I0805 13:03:58.245478  450393 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:03:58.245511  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:03:58.647807  450393 logs.go:123] Gathering logs for container status ...
	I0805 13:03:58.647857  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 13:03:58.694175  450393 logs.go:123] Gathering logs for dmesg ...
	I0805 13:03:58.694211  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:03:58.709744  450393 logs.go:123] Gathering logs for coredns [b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb] ...
	I0805 13:03:58.709773  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b22c1fc4aed8b791abc4aaeae637e4979c02ee68180aac3352e2196879eb61fb"
	I0805 13:03:58.750668  450393 logs.go:123] Gathering logs for kube-scheduler [8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756] ...
	I0805 13:03:58.750698  450393 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b553257286046a321bc21991466131d73cc3926365085bd4579e40753e54756"
	I0805 13:04:01.297212  450393 system_pods.go:59] 8 kube-system pods found
	I0805 13:04:01.297248  450393 system_pods.go:61] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.297255  450393 system_pods.go:61] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.297261  450393 system_pods.go:61] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.297265  450393 system_pods.go:61] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.297269  450393 system_pods.go:61] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.297273  450393 system_pods.go:61] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.297281  450393 system_pods.go:61] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.297289  450393 system_pods.go:61] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.297300  450393 system_pods.go:74] duration metric: took 3.904508974s to wait for pod list to return data ...
	I0805 13:04:01.297312  450393 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:01.299765  450393 default_sa.go:45] found service account: "default"
	I0805 13:04:01.299792  450393 default_sa.go:55] duration metric: took 2.470684ms for default service account to be created ...
	I0805 13:04:01.299802  450393 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:01.304612  450393 system_pods.go:86] 8 kube-system pods found
	I0805 13:04:01.304644  450393 system_pods.go:89] "coredns-7db6d8ff4d-wm7lh" [e3851d79-431c-4629-bfdc-ed9615cd46aa] Running
	I0805 13:04:01.304651  450393 system_pods.go:89] "etcd-embed-certs-321139" [98de664b-92d7-432d-9881-496dd8edd9f3] Running
	I0805 13:04:01.304656  450393 system_pods.go:89] "kube-apiserver-embed-certs-321139" [2d93e6df-1933-4ac1-82f6-d0d8f74f6d4e] Running
	I0805 13:04:01.304661  450393 system_pods.go:89] "kube-controller-manager-embed-certs-321139" [84165f78-f74b-4714-81b9-eeac2771b86b] Running
	I0805 13:04:01.304665  450393 system_pods.go:89] "kube-proxy-shgv2" [a19c5991-505f-4105-8c20-7afd63dd8e61] Running
	I0805 13:04:01.304670  450393 system_pods.go:89] "kube-scheduler-embed-certs-321139" [961a5013-fd55-48a2-adc2-acde33f6aed5] Running
	I0805 13:04:01.304677  450393 system_pods.go:89] "metrics-server-569cc877fc-k8mrt" [6d400b20-5de5-4046-b773-39766c67cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:01.304685  450393 system_pods.go:89] "storage-provisioner" [8b2db057-5262-4648-93ea-f2f0ed51a19b] Running
	I0805 13:04:01.304694  450393 system_pods.go:126] duration metric: took 4.885808ms to wait for k8s-apps to be running ...
	I0805 13:04:01.304702  450393 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:01.304751  450393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:01.323278  450393 system_svc.go:56] duration metric: took 18.55935ms WaitForService to wait for kubelet
	I0805 13:04:01.323316  450393 kubeadm.go:582] duration metric: took 4m22.01227204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:01.323349  450393 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:01.326802  450393 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:01.326829  450393 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:01.326843  450393 node_conditions.go:105] duration metric: took 3.486931ms to run NodePressure ...
	I0805 13:04:01.326859  450393 start.go:241] waiting for startup goroutines ...
	I0805 13:04:01.326869  450393 start.go:246] waiting for cluster config update ...
	I0805 13:04:01.326883  450393 start.go:255] writing updated cluster config ...
	I0805 13:04:01.327230  450393 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:01.380315  450393 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:01.381891  450393 out.go:177] * Done! kubectl is now configured to use "embed-certs-321139" cluster and "default" namespace by default
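
At this point the kubeconfig has been updated and the new context is active. A quick way to confirm from the host (names taken from the log above):

    kubectl config current-context                         # expected: embed-certs-321139
    kubectl --context embed-certs-321139 get pods -n kube-system
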
	I0805 13:03:57.113870  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:03:57.114408  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:03:57.114630  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
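
The kubelet-check above polls the kubelet's local healthz endpoint on port 10248 and keeps failing with "connection refused", i.e. nothing is listening there. The same probe can be run by hand; <profile> is a placeholder since this log stream does not name the profile, and the commands are the ones the check itself describes:

    minikube -p <profile> ssh -- curl -sSL http://localhost:10248/healthz   # "ok" when the kubelet is healthy
    minikube -p <profile> ssh -- sudo systemctl status kubelet --no-pager
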
	I0805 13:03:57.467412  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:57.967538  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.467217  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:58.967035  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.466816  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:03:59.966909  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.467553  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:00.967667  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.467382  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:01.967495  450884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 13:04:02.085428  450884 kubeadm.go:1113] duration metric: took 13.335097096s to wait for elevateKubeSystemPrivileges
	I0805 13:04:02.085464  450884 kubeadm.go:394] duration metric: took 5m13.227479413s to StartCluster
	I0805 13:04:02.085482  450884 settings.go:142] acquiring lock: {Name:mkef693333292ed53a03690c72ec170ce2e26d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.085571  450884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 13:04:02.087178  450884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/kubeconfig: {Name:mkf2ea766e58530103015ce4ba9d1ed3336f3926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 13:04:02.087425  450884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.228 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 13:04:02.087550  450884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 13:04:02.087653  450884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087659  450884 config.go:182] Loaded profile config "default-k8s-diff-port-371585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 13:04:02.087681  450884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087697  450884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-371585"
	I0805 13:04:02.087718  450884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087729  450884 addons.go:243] addon metrics-server should already be in state true
	I0805 13:04:02.087783  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.087727  450884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-371585"
	I0805 13:04:02.087692  450884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.087953  450884 addons.go:243] addon storage-provisioner should already be in state true
	I0805 13:04:02.087986  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088294  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088243  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088377  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.088406  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088415  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.088935  450884 out.go:177] * Verifying Kubernetes components...
	I0805 13:04:02.090386  450884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 13:04:02.105328  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0805 13:04:02.105335  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0805 13:04:02.105853  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.105848  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106395  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106398  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.106420  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106423  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.106506  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0805 13:04:02.106879  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.106957  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.106982  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.107193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.107508  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.107522  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.107534  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.107561  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.107903  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.108458  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.108490  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.111681  450884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-371585"
	W0805 13:04:02.111709  450884 addons.go:243] addon default-storageclass should already be in state true
	I0805 13:04:02.111775  450884 host.go:66] Checking if "default-k8s-diff-port-371585" exists ...
	I0805 13:04:02.113601  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.113648  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.127860  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0805 13:04:02.128512  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.128619  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0805 13:04:02.129023  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.129174  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129198  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129495  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.129516  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.129566  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129850  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.129879  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.130443  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.131691  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.132370  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.133468  450884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 13:04:02.134210  450884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0805 13:04:02.134899  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I0805 13:04:02.135049  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 13:04:02.135067  450884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 13:04:02.135099  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135183  450884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.135201  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 13:04:02.135216  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.135404  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.136704  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.136723  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.138362  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.138801  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.138918  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139264  450884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 13:04:02.139290  450884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 13:04:02.139335  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139377  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.139404  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139448  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139482  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.139503  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.139581  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139637  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.139737  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139807  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.139867  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.139909  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.159720  450884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0805 13:04:02.160199  450884 main.go:141] libmachine: () Calling .GetVersion
	I0805 13:04:02.160744  450884 main.go:141] libmachine: Using API Version  1
	I0805 13:04:02.160770  450884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 13:04:02.161048  450884 main.go:141] libmachine: () Calling .GetMachineName
	I0805 13:04:02.161246  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetState
	I0805 13:04:02.162535  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .DriverName
	I0805 13:04:02.162788  450884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.162805  450884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 13:04:02.162825  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHHostname
	I0805 13:04:02.165787  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166204  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:9f:83", ip: ""} in network mk-default-k8s-diff-port-371585: {Iface:virbr2 ExpiryTime:2024-08-05 13:50:49 +0000 UTC Type:0 Mac:52:54:00:f4:9f:83 Iaid: IPaddr:192.168.50.228 Prefix:24 Hostname:default-k8s-diff-port-371585 Clientid:01:52:54:00:f4:9f:83}
	I0805 13:04:02.166236  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | domain default-k8s-diff-port-371585 has defined IP address 192.168.50.228 and MAC address 52:54:00:f4:9f:83 in network mk-default-k8s-diff-port-371585
	I0805 13:04:02.166411  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHPort
	I0805 13:04:02.166594  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHKeyPath
	I0805 13:04:02.166744  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .GetSSHUsername
	I0805 13:04:02.166876  450884 sshutil.go:53] new ssh client: &{IP:192.168.50.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/default-k8s-diff-port-371585/id_rsa Username:docker}
	I0805 13:04:02.349175  450884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 13:04:02.453663  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 13:04:02.462474  450884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472177  450884 node_ready.go:49] node "default-k8s-diff-port-371585" has status "Ready":"True"
	I0805 13:04:02.472201  450884 node_ready.go:38] duration metric: took 9.692872ms for node "default-k8s-diff-port-371585" to be "Ready" ...
	I0805 13:04:02.472211  450884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 13:04:02.474341  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 13:04:02.474363  450884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0805 13:04:02.485604  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:02.514889  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 13:04:02.543388  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 13:04:02.543428  450884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 13:04:02.618040  450884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.618094  450884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 13:04:02.716705  450884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 13:04:02.784102  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784193  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784545  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784566  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784577  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.784586  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.784588  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.784851  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.784868  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:02.784868  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:02.797584  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:02.797617  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:02.797938  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:02.797956  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431060  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431091  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431452  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431494  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) DBG | Closing plugin on server side
	I0805 13:04:03.431511  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.431530  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.431539  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.431839  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.431893  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.746668  450884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029912928s)
	I0805 13:04:03.746734  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.746750  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.747152  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.747180  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.747191  450884 main.go:141] libmachine: Making call to close driver server
	I0805 13:04:03.747200  450884 main.go:141] libmachine: (default-k8s-diff-port-371585) Calling .Close
	I0805 13:04:03.748527  450884 main.go:141] libmachine: Successfully made call to close driver server
	I0805 13:04:03.748558  450884 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 13:04:03.748571  450884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-371585"
	I0805 13:04:03.750522  450884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0805 13:04:03.751714  450884 addons.go:510] duration metric: took 1.664163176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
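
metrics-server is enabled here but its pod is still Pending in the listings below, so the metrics API is not serving yet. A hedged way to verify the addon once it settles (context and deployment names are the ones in the log; the commands are standard kubectl/minikube usage):

    minikube -p default-k8s-diff-port-371585 addons list
    kubectl --context default-k8s-diff-port-371585 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-371585 top nodes   # only works once the metrics API is available
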
	I0805 13:04:04.491832  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.491861  450884 pod_ready.go:81] duration metric: took 2.00623062s for pod "coredns-7db6d8ff4d-5vxpl" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.491870  450884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496173  450884 pod_ready.go:92] pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.496194  450884 pod_ready.go:81] duration metric: took 4.317446ms for pod "coredns-7db6d8ff4d-qtt9j" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.496202  450884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500270  450884 pod_ready.go:92] pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.500297  450884 pod_ready.go:81] duration metric: took 4.088399ms for pod "etcd-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.500309  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504892  450884 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.504917  450884 pod_ready.go:81] duration metric: took 4.598589ms for pod "kube-apiserver-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.504926  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509448  450884 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.509468  450884 pod_ready.go:81] duration metric: took 4.535174ms for pod "kube-controller-manager-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.509478  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890517  450884 pod_ready.go:92] pod "kube-proxy-4v6sn" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:04.890544  450884 pod_ready.go:81] duration metric: took 381.059204ms for pod "kube-proxy-4v6sn" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:04.890552  450884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289670  450884 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace has status "Ready":"True"
	I0805 13:04:05.289701  450884 pod_ready.go:81] duration metric: took 399.141309ms for pod "kube-scheduler-default-k8s-diff-port-371585" in "kube-system" namespace to be "Ready" ...
	I0805 13:04:05.289712  450884 pod_ready.go:38] duration metric: took 2.817491444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
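
The per-pod waits above gate on the standard Ready condition for the system-critical labels listed at the start of the wait. A rough equivalent using kubectl's built-in wait, assuming the same label selectors:

    kubectl --context default-k8s-diff-port-371585 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl --context default-k8s-diff-port-371585 -n kube-system \
      wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
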
	I0805 13:04:05.289732  450884 api_server.go:52] waiting for apiserver process to appear ...
	I0805 13:04:05.289805  450884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 13:04:05.305815  450884 api_server.go:72] duration metric: took 3.218344531s to wait for apiserver process to appear ...
	I0805 13:04:05.305848  450884 api_server.go:88] waiting for apiserver healthz status ...
	I0805 13:04:05.305870  450884 api_server.go:253] Checking apiserver healthz at https://192.168.50.228:8444/healthz ...
	I0805 13:04:05.311144  450884 api_server.go:279] https://192.168.50.228:8444/healthz returned 200:
	ok
	I0805 13:04:05.312427  450884 api_server.go:141] control plane version: v1.30.3
	I0805 13:04:05.312450  450884 api_server.go:131] duration metric: took 6.595933ms to wait for apiserver health ...
	I0805 13:04:05.312460  450884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 13:04:05.493376  450884 system_pods.go:59] 9 kube-system pods found
	I0805 13:04:05.493417  450884 system_pods.go:61] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.493425  450884 system_pods.go:61] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.493432  450884 system_pods.go:61] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.493438  450884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.493444  450884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.493450  450884 system_pods.go:61] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.493456  450884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.493465  450884 system_pods.go:61] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.493475  450884 system_pods.go:61] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.493488  450884 system_pods.go:74] duration metric: took 181.019102ms to wait for pod list to return data ...
	I0805 13:04:05.493504  450884 default_sa.go:34] waiting for default service account to be created ...
	I0805 13:04:05.688283  450884 default_sa.go:45] found service account: "default"
	I0805 13:04:05.688313  450884 default_sa.go:55] duration metric: took 194.799711ms for default service account to be created ...
	I0805 13:04:05.688323  450884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 13:04:05.892656  450884 system_pods.go:86] 9 kube-system pods found
	I0805 13:04:05.892688  450884 system_pods.go:89] "coredns-7db6d8ff4d-5vxpl" [6f6aa906-d76f-4f92-8de4-4d3a4a1ee733] Running
	I0805 13:04:05.892696  450884 system_pods.go:89] "coredns-7db6d8ff4d-qtt9j" [8dcadd0b-af8c-4d76-a1d1-ceeaffb725b8] Running
	I0805 13:04:05.892702  450884 system_pods.go:89] "etcd-default-k8s-diff-port-371585" [c3ab12b8-78ea-42c5-a1d3-e37eb9e72961] Running
	I0805 13:04:05.892709  450884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-371585" [16d27e99-f652-4e88-907f-c2895f051a8a] Running
	I0805 13:04:05.892715  450884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-371585" [f8d0d828-a7fb-4887-bbf9-e3ad9fd3ebf3] Running
	I0805 13:04:05.892721  450884 system_pods.go:89] "kube-proxy-4v6sn" [497a1512-cdee-49ff-92ea-ea523d3de2a4] Running
	I0805 13:04:05.892727  450884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-371585" [48ae4522-6d11-4f79-820b-68eb06410186] Running
	I0805 13:04:05.892737  450884 system_pods.go:89] "metrics-server-569cc877fc-xf92r" [edb560ac-ddb1-4afa-b3a3-aa054ea38162] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 13:04:05.892743  450884 system_pods.go:89] "storage-provisioner" [8f3de3fc-9b34-4a46-a7cf-5487647b06ca] Running
	I0805 13:04:05.892755  450884 system_pods.go:126] duration metric: took 204.423562ms to wait for k8s-apps to be running ...
	I0805 13:04:05.892765  450884 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 13:04:05.892819  450884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:04:05.907542  450884 system_svc.go:56] duration metric: took 14.764349ms WaitForService to wait for kubelet
	I0805 13:04:05.907576  450884 kubeadm.go:582] duration metric: took 3.820116927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 13:04:05.907599  450884 node_conditions.go:102] verifying NodePressure condition ...
	I0805 13:04:06.089000  450884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 13:04:06.089025  450884 node_conditions.go:123] node cpu capacity is 2
	I0805 13:04:06.089035  450884 node_conditions.go:105] duration metric: took 181.431221ms to run NodePressure ...
	I0805 13:04:06.089047  450884 start.go:241] waiting for startup goroutines ...
	I0805 13:04:06.089054  450884 start.go:246] waiting for cluster config update ...
	I0805 13:04:06.089065  450884 start.go:255] writing updated cluster config ...
	I0805 13:04:06.089373  450884 ssh_runner.go:195] Run: rm -f paused
	I0805 13:04:06.140202  450884 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 13:04:06.142149  450884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-371585" cluster and "default" namespace by default
	I0805 13:04:02.115811  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:02.116057  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:12.115990  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:12.116208  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:04:32.116734  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:04:32.117001  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119196  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:05:12.119475  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:05:12.119502  451238 kubeadm.go:310] 
	I0805 13:05:12.119564  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:05:12.119622  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:05:12.119634  451238 kubeadm.go:310] 
	I0805 13:05:12.119680  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:05:12.119724  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:05:12.119880  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:05:12.119898  451238 kubeadm.go:310] 
	I0805 13:05:12.120029  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:05:12.120114  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:05:12.120169  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:05:12.120179  451238 kubeadm.go:310] 
	I0805 13:05:12.120321  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:05:12.120445  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:05:12.120455  451238 kubeadm.go:310] 
	I0805 13:05:12.120612  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:05:12.120751  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:05:12.120888  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:05:12.121010  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:05:12.121023  451238 kubeadm.go:310] 
	I0805 13:05:12.121325  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:05:12.121458  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:05:12.121545  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0805 13:05:12.121714  451238 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0805 13:05:12.121782  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0805 13:05:12.587687  451238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 13:05:12.603422  451238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 13:05:12.614302  451238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 13:05:12.614330  451238 kubeadm.go:157] found existing configuration files:
	
	I0805 13:05:12.614391  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 13:05:12.625131  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 13:05:12.625199  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 13:05:12.635606  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 13:05:12.644896  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 13:05:12.644953  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 13:05:12.655178  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.664668  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 13:05:12.664753  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 13:05:12.675174  451238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 13:05:12.684765  451238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 13:05:12.684834  451238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 13:05:12.694762  451238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 13:05:12.930906  451238 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 13:07:09.256859  451238 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0805 13:07:09.257016  451238 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0805 13:07:09.258511  451238 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0805 13:07:09.258579  451238 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 13:07:09.258710  451238 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 13:07:09.258881  451238 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 13:07:09.259022  451238 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 13:07:09.259125  451238 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 13:07:09.260912  451238 out.go:204]   - Generating certificates and keys ...
	I0805 13:07:09.261023  451238 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 13:07:09.261123  451238 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 13:07:09.261232  451238 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 13:07:09.261319  451238 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 13:07:09.261411  451238 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 13:07:09.261507  451238 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 13:07:09.261601  451238 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 13:07:09.261690  451238 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 13:07:09.261801  451238 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 13:07:09.261946  451238 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 13:07:09.262015  451238 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 13:07:09.262119  451238 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 13:07:09.262198  451238 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 13:07:09.262273  451238 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 13:07:09.262369  451238 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 13:07:09.262464  451238 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 13:07:09.262615  451238 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 13:07:09.262731  451238 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 13:07:09.262770  451238 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 13:07:09.262831  451238 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 13:07:09.264428  451238 out.go:204]   - Booting up control plane ...
	I0805 13:07:09.264537  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 13:07:09.264663  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 13:07:09.264774  451238 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 13:07:09.264896  451238 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 13:07:09.265144  451238 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 13:07:09.265224  451238 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0805 13:07:09.265318  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265554  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265630  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.265783  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.265886  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266143  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266221  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266387  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266472  451238 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0805 13:07:09.266656  451238 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0805 13:07:09.266673  451238 kubeadm.go:310] 
	I0805 13:07:09.266707  451238 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0805 13:07:09.266738  451238 kubeadm.go:310] 		timed out waiting for the condition
	I0805 13:07:09.266743  451238 kubeadm.go:310] 
	I0805 13:07:09.266788  451238 kubeadm.go:310] 	This error is likely caused by:
	I0805 13:07:09.266819  451238 kubeadm.go:310] 		- The kubelet is not running
	I0805 13:07:09.266924  451238 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0805 13:07:09.266932  451238 kubeadm.go:310] 
	I0805 13:07:09.267050  451238 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0805 13:07:09.267137  451238 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0805 13:07:09.267192  451238 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0805 13:07:09.267201  451238 kubeadm.go:310] 
	I0805 13:07:09.267316  451238 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0805 13:07:09.267435  451238 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0805 13:07:09.267445  451238 kubeadm.go:310] 
	I0805 13:07:09.267570  451238 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0805 13:07:09.267683  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0805 13:07:09.267802  451238 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0805 13:07:09.267898  451238 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0805 13:07:09.267986  451238 kubeadm.go:310] 
	I0805 13:07:09.268003  451238 kubeadm.go:394] duration metric: took 7m57.870990174s to StartCluster
	I0805 13:07:09.268066  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 13:07:09.268158  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 13:07:09.311436  451238 cri.go:89] found id: ""
	I0805 13:07:09.311471  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.311497  451238 logs.go:278] No container was found matching "kube-apiserver"
	I0805 13:07:09.311509  451238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 13:07:09.311573  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 13:07:09.347748  451238 cri.go:89] found id: ""
	I0805 13:07:09.347776  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.347784  451238 logs.go:278] No container was found matching "etcd"
	I0805 13:07:09.347797  451238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 13:07:09.347860  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 13:07:09.385418  451238 cri.go:89] found id: ""
	I0805 13:07:09.385445  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.385453  451238 logs.go:278] No container was found matching "coredns"
	I0805 13:07:09.385460  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 13:07:09.385517  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 13:07:09.427209  451238 cri.go:89] found id: ""
	I0805 13:07:09.427255  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.427268  451238 logs.go:278] No container was found matching "kube-scheduler"
	I0805 13:07:09.427276  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 13:07:09.427360  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 13:07:09.461763  451238 cri.go:89] found id: ""
	I0805 13:07:09.461787  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.461795  451238 logs.go:278] No container was found matching "kube-proxy"
	I0805 13:07:09.461801  451238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 13:07:09.461854  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 13:07:09.498655  451238 cri.go:89] found id: ""
	I0805 13:07:09.498692  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.498705  451238 logs.go:278] No container was found matching "kube-controller-manager"
	I0805 13:07:09.498713  451238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 13:07:09.498782  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 13:07:09.534100  451238 cri.go:89] found id: ""
	I0805 13:07:09.534134  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.534143  451238 logs.go:278] No container was found matching "kindnet"
	I0805 13:07:09.534149  451238 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0805 13:07:09.534207  451238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0805 13:07:09.570089  451238 cri.go:89] found id: ""
	I0805 13:07:09.570125  451238 logs.go:276] 0 containers: []
	W0805 13:07:09.570137  451238 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0805 13:07:09.570153  451238 logs.go:123] Gathering logs for kubelet ...
	I0805 13:07:09.570176  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 13:07:09.625158  451238 logs.go:123] Gathering logs for dmesg ...
	I0805 13:07:09.625199  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 13:07:09.640087  451238 logs.go:123] Gathering logs for describe nodes ...
	I0805 13:07:09.640119  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0805 13:07:09.719851  451238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0805 13:07:09.719879  451238 logs.go:123] Gathering logs for CRI-O ...
	I0805 13:07:09.719895  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 13:07:09.832717  451238 logs.go:123] Gathering logs for container status ...
	I0805 13:07:09.832758  451238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0805 13:07:09.878585  451238 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0805 13:07:09.878653  451238 out.go:239] * 
	W0805 13:07:09.878739  451238 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.878767  451238 out.go:239] * 
	W0805 13:07:09.879755  451238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 13:07:09.883027  451238 out.go:177] 
	W0805 13:07:09.884197  451238 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0805 13:07:09.884243  451238 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0805 13:07:09.884265  451238 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0805 13:07:09.885783  451238 out.go:177] 
	
	
	==> CRI-O <==
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.432953606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863903432920634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a1adb1c-a8ea-4a72-a60e-6cfdd84a96b0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.433632117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=249e0f6c-976a-438f-9b8a-738497cf49e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.433699608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=249e0f6c-976a-438f-9b8a-738497cf49e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.433735638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=249e0f6c-976a-438f-9b8a-738497cf49e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.465766362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a41b90f-65ee-4fbb-bbc0-e515ed2905b4 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.465867956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a41b90f-65ee-4fbb-bbc0-e515ed2905b4 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.467015990Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd517ba5-11d7-453a-902c-58367c82fc22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.467521516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863903467497989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd517ba5-11d7-453a-902c-58367c82fc22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.468253038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ba3b6cd-f120-42c0-b29d-631945a357f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.468310099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ba3b6cd-f120-42c0-b29d-631945a357f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.468348203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8ba3b6cd-f120-42c0-b29d-631945a357f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.498383413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61df7326-ea11-4b17-8f12-60e8b3abe4ed name=/runtime.v1.RuntimeService/Version
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.498473443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61df7326-ea11-4b17-8f12-60e8b3abe4ed name=/runtime.v1.RuntimeService/Version
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.499712514Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b2dea44-dd92-401c-a0b4-3ab10f37a830 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.500122486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863903500089828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b2dea44-dd92-401c-a0b4-3ab10f37a830 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.501015338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69bec66c-3079-47a6-a27b-8904025558a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.501060878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69bec66c-3079-47a6-a27b-8904025558a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.501107641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=69bec66c-3079-47a6-a27b-8904025558a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.531520050Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba121932-e357-4755-b390-8e4d66484307 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.531603653Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba121932-e357-4755-b390-8e4d66484307 name=/runtime.v1.RuntimeService/Version
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.532720255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f7a8454-ae72-4358-9b83-6c0b6f08ea43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.533145821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722863903533118371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f7a8454-ae72-4358-9b83-6c0b6f08ea43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.533718373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fb8bd72-1d95-4cb7-94b9-ad636050db24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.533774684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fb8bd72-1d95-4cb7-94b9-ad636050db24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 13:18:23 old-k8s-version-635707 crio[653]: time="2024-08-05 13:18:23.533815322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8fb8bd72-1d95-4cb7-94b9-ad636050db24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 5 12:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051038] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.092710] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.744514] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605530] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 5 12:59] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.063666] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056001] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.204547] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.129155] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.264906] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +6.500378] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.060609] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.866070] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +12.192283] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 5 13:03] systemd-fstab-generator[5024]: Ignoring "noauto" option for root device
	[Aug 5 13:05] systemd-fstab-generator[5302]: Ignoring "noauto" option for root device
	[  +0.067316] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:18:23 up 19 min,  0 users,  load average: 0.05, 0.08, 0.07
	Linux old-k8s-version-635707 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0006e8e20, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000976b10, 0x24, 0x60, 0x7f0ef8d60250, 0x118, ...)
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: net/http.(*Transport).dial(0xc000864dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000976b10, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: net/http.(*Transport).dialConn(0xc000864dc0, 0x4f7fe00, 0xc000120018, 0x0, 0xc000996de0, 0x5, 0xc000976b10, 0x24, 0x0, 0xc0009ae240, ...)
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: net/http.(*Transport).dialConnFor(0xc000864dc0, 0xc00089bef0)
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: created by net/http.(*Transport).queueForDial
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: goroutine 163 [select]:
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: net.(*netFD).connect.func2(0x4f7fe40, 0xc00033cde0, 0xc0006ed400, 0xc000997200, 0xc0009971a0)
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: created by net.(*netFD).connect
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Aug 05 13:18:22 old-k8s-version-635707 kubelet[6774]: E0805 13:18:22.250740    6774 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.41:8443: connect: connection refused
	Aug 05 13:18:22 old-k8s-version-635707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Aug 05 13:18:22 old-k8s-version-635707 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 05 13:18:22 old-k8s-version-635707 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 05 13:18:23 old-k8s-version-635707 kubelet[6800]: I0805 13:18:23.007591    6800 server.go:416] Version: v1.20.0
	Aug 05 13:18:23 old-k8s-version-635707 kubelet[6800]: I0805 13:18:23.007976    6800 server.go:837] Client rotation is on, will bootstrap in background
	Aug 05 13:18:23 old-k8s-version-635707 kubelet[6800]: I0805 13:18:23.011303    6800 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 05 13:18:23 old-k8s-version-635707 kubelet[6800]: I0805 13:18:23.013465    6800 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 05 13:18:23 old-k8s-version-635707 kubelet[6800]: W0805 13:18:23.013500    6800 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 2 (226.984793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-635707" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.03s)
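
Note: every watch in the kubelet log above fails with "dial tcp 192.168.61.41:8443: connect: connection refused", which is also why `status --format={{.APIServer}}` reports Stopped. A minimal Go sketch for probing that endpoint out of band; the address is copied from the log above, and this helper is illustrative only, not part of the test harness:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Address observed in the kubelet log above; adjust for your own cluster.
		addr := "192.168.61.41:8443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// Matches the "connect: connection refused" errors seen while the
			// apiserver is down.
			fmt.Fprintf(os.Stderr, "apiserver %s not reachable: %v\n", addr, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("apiserver %s is accepting TCP connections\n", addr)
	}

If the dial succeeds while kubelet still crash-loops, the problem sits above the TCP layer (certificates or apiserver health) rather than basic connectivity.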

                                                
                                    

Test pass (248/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 24.7
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 13.36
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 19.5
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.16
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 67.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 154.7
40 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/parallel/Registry 16.64
44 TestAddons/parallel/InspektorGadget 11.95
46 TestAddons/parallel/HelmTiller 11.77
48 TestAddons/parallel/CSI 92.06
49 TestAddons/parallel/Headlamp 20.07
50 TestAddons/parallel/CloudSpanner 5.64
51 TestAddons/parallel/LocalPath 57.25
52 TestAddons/parallel/NvidiaDevicePlugin 5.53
53 TestAddons/parallel/Yakd 10.87
55 TestCertOptions 98.09
56 TestCertExpiration 314.48
58 TestForceSystemdFlag 83.78
59 TestForceSystemdEnv 76.74
61 TestKVMDriverInstallOrUpdate 4.51
65 TestErrorSpam/setup 39.55
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.53
69 TestErrorSpam/unpause 1.54
70 TestErrorSpam/stop 4.78
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 55.81
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 54.45
77 TestFunctional/serial/KubeContext 0.05
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
82 TestFunctional/serial/CacheCmd/cache/add_local 2.22
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 60.04
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.55
93 TestFunctional/serial/LogsFileCmd 1.59
94 TestFunctional/serial/InvalidService 9.72
96 TestFunctional/parallel/ConfigCmd 0.32
97 TestFunctional/parallel/DashboardCmd 13.89
98 TestFunctional/parallel/DryRun 0.3
99 TestFunctional/parallel/InternationalLanguage 0.16
100 TestFunctional/parallel/StatusCmd 1.03
104 TestFunctional/parallel/ServiceCmdConnect 8.54
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 40.75
108 TestFunctional/parallel/SSHCmd 0.47
109 TestFunctional/parallel/CpCmd 1.28
110 TestFunctional/parallel/MySQL 23.02
111 TestFunctional/parallel/FileSync 0.28
112 TestFunctional/parallel/CertSync 1.32
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
120 TestFunctional/parallel/License 0.63
121 TestFunctional/parallel/ServiceCmd/DeployApp 12.19
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.7
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.47
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
128 TestFunctional/parallel/ImageCommands/ImageBuild 4.45
129 TestFunctional/parallel/ImageCommands/Setup 1.95
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
131 TestFunctional/parallel/ProfileCmd/profile_list 0.38
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.1
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.83
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.94
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
141 TestFunctional/parallel/ServiceCmd/List 0.54
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
147 TestFunctional/parallel/ServiceCmd/Format 0.33
148 TestFunctional/parallel/ServiceCmd/URL 0.31
158 TestFunctional/parallel/MountCmd/specific-port 1.67
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.14
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 213.89
167 TestMultiControlPlane/serial/DeployApp 7.52
168 TestMultiControlPlane/serial/PingHostFromPods 1.22
169 TestMultiControlPlane/serial/AddWorkerNode 85.61
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.77
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.36
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
181 TestMultiControlPlane/serial/RestartCluster 324.26
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 84.58
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
188 TestJSONOutput/start/Command 95.3
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.73
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.63
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.35
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 89.99
220 TestMountStart/serial/StartWithMountFirst 32.28
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 27.41
223 TestMountStart/serial/VerifyMountSecond 0.38
224 TestMountStart/serial/DeleteFirst 0.67
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 22.82
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 120.62
232 TestMultiNode/serial/DeployApp2Nodes 5.3
233 TestMultiNode/serial/PingHostFrom2Pods 0.78
234 TestMultiNode/serial/AddNode 47.51
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.22
237 TestMultiNode/serial/CopyFile 7.14
238 TestMultiNode/serial/StopNode 2.33
239 TestMultiNode/serial/StartAfterStop 39.63
241 TestMultiNode/serial/DeleteNode 2.15
243 TestMultiNode/serial/RestartMultiNode 182.36
244 TestMultiNode/serial/ValidateNameConflict 44.22
251 TestScheduledStopUnix 112.22
255 TestRunningBinaryUpgrade 185.66
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 99.83
269 TestNetworkPlugins/group/false 3.45
273 TestNoKubernetes/serial/StartWithStopK8s 65.61
274 TestNoKubernetes/serial/Start 28.09
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
276 TestNoKubernetes/serial/ProfileList 0.8
277 TestNoKubernetes/serial/Stop 1.66
278 TestNoKubernetes/serial/StartNoArgs 67.82
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
280 TestStoppedBinaryUpgrade/Setup 2.65
281 TestStoppedBinaryUpgrade/Upgrade 95.65
290 TestPause/serial/Start 77.89
291 TestNetworkPlugins/group/auto/Start 126.17
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
293 TestNetworkPlugins/group/kindnet/Start 102.17
295 TestNetworkPlugins/group/calico/Start 92.25
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/auto/KubeletFlags 0.21
298 TestNetworkPlugins/group/auto/NetCatPod 10.25
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
300 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
301 TestNetworkPlugins/group/auto/DNS 0.16
302 TestNetworkPlugins/group/auto/Localhost 0.18
303 TestNetworkPlugins/group/auto/HairPin 0.12
304 TestNetworkPlugins/group/kindnet/DNS 0.18
305 TestNetworkPlugins/group/kindnet/Localhost 0.13
306 TestNetworkPlugins/group/kindnet/HairPin 0.16
307 TestNetworkPlugins/group/custom-flannel/Start 90.36
308 TestNetworkPlugins/group/enable-default-cni/Start 133.12
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.21
311 TestNetworkPlugins/group/calico/NetCatPod 11.22
312 TestNetworkPlugins/group/calico/DNS 0.18
313 TestNetworkPlugins/group/calico/Localhost 0.15
314 TestNetworkPlugins/group/calico/HairPin 0.14
315 TestNetworkPlugins/group/flannel/Start 81.63
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
318 TestNetworkPlugins/group/custom-flannel/DNS 0.15
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
321 TestNetworkPlugins/group/bridge/Start 108.74
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
330 TestStartStop/group/no-preload/serial/FirstStart 116.76
331 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
333 TestNetworkPlugins/group/flannel/NetCatPod 10.46
334 TestNetworkPlugins/group/flannel/DNS 0.33
335 TestNetworkPlugins/group/flannel/Localhost 0.18
336 TestNetworkPlugins/group/flannel/HairPin 0.18
338 TestStartStop/group/embed-certs/serial/FirstStart 62.31
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
340 TestNetworkPlugins/group/bridge/NetCatPod 11.37
341 TestNetworkPlugins/group/bridge/DNS 0.2
342 TestNetworkPlugins/group/bridge/Localhost 0.14
343 TestNetworkPlugins/group/bridge/HairPin 0.15
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.09
346 TestStartStop/group/embed-certs/serial/DeployApp 9.27
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
349 TestStartStop/group/no-preload/serial/DeployApp 10.29
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
358 TestStartStop/group/embed-certs/serial/SecondStart 637.64
360 TestStartStop/group/no-preload/serial/SecondStart 602.49
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 589.38
363 TestStartStop/group/old-k8s-version/serial/Stop 3.29
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
375 TestStartStop/group/newest-cni/serial/FirstStart 49.39
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
378 TestStartStop/group/newest-cni/serial/Stop 11.38
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
380 TestStartStop/group/newest-cni/serial/SecondStart 76.05
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.16
384 TestStartStop/group/newest-cni/serial/Pause 3.7
x
+
TestDownloadOnly/v1.20.0/json-events (24.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-476412 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-476412 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.704566749s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.70s)
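
The json-events variant starts minikube with `-o=json` and consumes the machine-readable events printed on stdout, one JSON object per line. A rough Go sketch of reading that stream; the profile name is illustrative, and nothing beyond line-delimited JSON is assumed about the event schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Essentially the invocation from the test above, with an illustrative profile name.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json", "--download-only",
			"-p", "download-only-example", "--force", "--alsologtostderr",
			"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			// Each stdout line is expected to be a self-contained JSON event.
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON output
			}
			fmt.Println(ev)
		}
		if err := cmd.Wait(); err != nil {
			log.Fatal(err)
		}
	}

Decoding into a generic map keeps the sketch independent of the exact event fields minikube emits.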

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-476412
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-476412: exit status 85 (58.406943ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-476412 | jenkins | v1.33.1 | 05 Aug 24 11:26 UTC |          |
	|         | -p download-only-476412        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:26:53
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:26:53.118208  391231 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:26:53.118493  391231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:26:53.118504  391231 out.go:304] Setting ErrFile to fd 2...
	I0805 11:26:53.118509  391231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:26:53.118697  391231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	W0805 11:26:53.118817  391231 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19377-383955/.minikube/config/config.json: open /home/jenkins/minikube-integration/19377-383955/.minikube/config/config.json: no such file or directory
	I0805 11:26:53.119393  391231 out.go:298] Setting JSON to true
	I0805 11:26:53.120399  391231 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4160,"bootTime":1722853053,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:26:53.120469  391231 start.go:139] virtualization: kvm guest
	I0805 11:26:53.122761  391231 out.go:97] [download-only-476412] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0805 11:26:53.122901  391231 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 11:26:53.122964  391231 notify.go:220] Checking for updates...
	I0805 11:26:53.124479  391231 out.go:169] MINIKUBE_LOCATION=19377
	I0805 11:26:53.125848  391231 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:26:53.127257  391231 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:26:53.128617  391231 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:26:53.129822  391231 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0805 11:26:53.132375  391231 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 11:26:53.132663  391231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:26:53.164954  391231 out.go:97] Using the kvm2 driver based on user configuration
	I0805 11:26:53.164986  391231 start.go:297] selected driver: kvm2
	I0805 11:26:53.164994  391231 start.go:901] validating driver "kvm2" against <nil>
	I0805 11:26:53.165481  391231 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:26:53.165591  391231 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:26:53.180886  391231 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:26:53.180960  391231 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:26:53.181445  391231 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0805 11:26:53.181584  391231 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 11:26:53.181611  391231 cni.go:84] Creating CNI manager for ""
	I0805 11:26:53.181626  391231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:26:53.181636  391231 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 11:26:53.181705  391231 start.go:340] cluster config:
	{Name:download-only-476412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-476412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:26:53.181883  391231 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:26:53.183837  391231 out.go:97] Downloading VM boot image ...
	I0805 11:26:53.183872  391231 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 11:27:03.372719  391231 out.go:97] Starting "download-only-476412" primary control-plane node in "download-only-476412" cluster
	I0805 11:27:03.372739  391231 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 11:27:03.482512  391231 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 11:27:03.482545  391231 cache.go:56] Caching tarball of preloaded images
	I0805 11:27:03.482712  391231 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 11:27:03.484694  391231 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 11:27:03.484708  391231 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 11:27:03.594208  391231 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-476412 host does not exist
	  To start a cluster, run: "minikube start -p download-only-476412"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
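
The log above fetches preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 with `checksum=md5:f93b07cde9c3289306cbaeb7a1803c19`. A small Go sketch for re-checking a cached tarball against that MD5 out of band; the filename and checksum are copied from the log, the default ~/.minikube location below is an assumption (this CI run uses a custom MINIKUBE_HOME), and this is not how the test itself verifies the file:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Path assumes the default cache location; the expected MD5 comes from the
		// download URL in the log above.
		path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
		want := "f93b07cde9c3289306cbaeb7a1803c19"

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload checksum OK:", path)
	}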

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-476412
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (13.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-704604 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-704604 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.357298161s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (13.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-704604
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-704604: exit status 85 (61.398707ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-476412 | jenkins | v1.33.1 | 05 Aug 24 11:26 UTC |                     |
	|         | -p download-only-476412        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| delete  | -p download-only-476412        | download-only-476412 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| start   | -o=json --download-only        | download-only-704604 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | -p download-only-704604        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:27:18
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:27:18.142708  391485 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:27:18.142837  391485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:18.142848  391485 out.go:304] Setting ErrFile to fd 2...
	I0805 11:27:18.142852  391485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:18.143047  391485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:27:18.143654  391485 out.go:298] Setting JSON to true
	I0805 11:27:18.144694  391485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4185,"bootTime":1722853053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:27:18.144756  391485 start.go:139] virtualization: kvm guest
	I0805 11:27:18.146957  391485 out.go:97] [download-only-704604] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:27:18.147130  391485 notify.go:220] Checking for updates...
	I0805 11:27:18.148465  391485 out.go:169] MINIKUBE_LOCATION=19377
	I0805 11:27:18.150096  391485 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:27:18.151381  391485 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:27:18.152512  391485 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:18.153587  391485 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0805 11:27:18.155839  391485 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 11:27:18.156054  391485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:27:18.187513  391485 out.go:97] Using the kvm2 driver based on user configuration
	I0805 11:27:18.187546  391485 start.go:297] selected driver: kvm2
	I0805 11:27:18.187552  391485 start.go:901] validating driver "kvm2" against <nil>
	I0805 11:27:18.187913  391485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:18.187999  391485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:27:18.203635  391485 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:27:18.203703  391485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:27:18.204227  391485 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0805 11:27:18.204414  391485 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 11:27:18.204444  391485 cni.go:84] Creating CNI manager for ""
	I0805 11:27:18.204455  391485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:27:18.204472  391485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 11:27:18.204534  391485 start.go:340] cluster config:
	{Name:download-only-704604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-704604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:27:18.204655  391485 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:18.206467  391485 out.go:97] Starting "download-only-704604" primary control-plane node in "download-only-704604" cluster
	I0805 11:27:18.206488  391485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:27:18.799300  391485 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 11:27:18.799343  391485 cache.go:56] Caching tarball of preloaded images
	I0805 11:27:18.799538  391485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 11:27:18.801602  391485 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 11:27:18.801636  391485 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0805 11:27:18.911652  391485 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-704604 host does not exist
	  To start a cluster, run: "minikube start -p download-only-704604"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-704604
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/json-events (19.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-413572 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-413572 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (19.503734677s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (19.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-413572
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-413572: exit status 85 (162.779473ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-476412 | jenkins | v1.33.1 | 05 Aug 24 11:26 UTC |                     |
	|         | -p download-only-476412           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| delete  | -p download-only-476412           | download-only-476412 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| start   | -o=json --download-only           | download-only-704604 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | -p download-only-704604           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| delete  | -p download-only-704604           | download-only-704604 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC | 05 Aug 24 11:27 UTC |
	| start   | -o=json --download-only           | download-only-413572 | jenkins | v1.33.1 | 05 Aug 24 11:27 UTC |                     |
	|         | -p download-only-413572           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:27:31
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:27:31.821349  391705 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:27:31.821492  391705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:31.821503  391705 out.go:304] Setting ErrFile to fd 2...
	I0805 11:27:31.821509  391705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:27:31.821720  391705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:27:31.822296  391705 out.go:298] Setting JSON to true
	I0805 11:27:31.823219  391705 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4199,"bootTime":1722853053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:27:31.823293  391705 start.go:139] virtualization: kvm guest
	I0805 11:27:31.825194  391705 out.go:97] [download-only-413572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:27:31.825357  391705 notify.go:220] Checking for updates...
	I0805 11:27:31.826599  391705 out.go:169] MINIKUBE_LOCATION=19377
	I0805 11:27:31.827995  391705 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:27:31.829311  391705 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:27:31.830567  391705 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:27:31.831819  391705 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0805 11:27:31.834090  391705 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 11:27:31.834301  391705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:27:31.865938  391705 out.go:97] Using the kvm2 driver based on user configuration
	I0805 11:27:31.865973  391705 start.go:297] selected driver: kvm2
	I0805 11:27:31.865979  391705 start.go:901] validating driver "kvm2" against <nil>
	I0805 11:27:31.866390  391705 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:31.866485  391705 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19377-383955/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 11:27:31.881781  391705 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 11:27:31.881838  391705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:27:31.882354  391705 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0805 11:27:31.882522  391705 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 11:27:31.882548  391705 cni.go:84] Creating CNI manager for ""
	I0805 11:27:31.882556  391705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 11:27:31.882568  391705 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 11:27:31.882628  391705 start.go:340] cluster config:
	{Name:download-only-413572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-413572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:27:31.882738  391705 iso.go:125] acquiring lock: {Name:mk78a4988ea0dfb86bb6f7367e362683a39fd912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:27:31.884613  391705 out.go:97] Starting "download-only-413572" primary control-plane node in "download-only-413572" cluster
	I0805 11:27:31.884650  391705 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 11:27:32.474197  391705 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0805 11:27:32.474250  391705 cache.go:56] Caching tarball of preloaded images
	I0805 11:27:32.474404  391705 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 11:27:32.476381  391705 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 11:27:32.476397  391705 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 11:27:32.586826  391705 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0805 11:27:43.318600  391705 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 11:27:43.318701  391705 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19377-383955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 11:27:44.060183  391705 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0805 11:27:44.060552  391705 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/download-only-413572/config.json ...
	I0805 11:27:44.060584  391705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/download-only-413572/config.json: {Name:mk477e55d95c755f716bdd2ce40b211c16784d1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:27:44.060768  391705 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 11:27:44.060943  391705 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19377-383955/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-413572 host does not exist
	  To start a cluster, run: "minikube start -p download-only-413572"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-413572
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-355673 --alsologtostderr --binary-mirror http://127.0.0.1:37911 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-355673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-355673
--- PASS: TestBinaryMirror (0.57s)
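
TestBinaryMirror points `minikube start --download-only` at a local HTTP mirror via `--binary-mirror http://127.0.0.1:37911`. A hedged Go sketch of serving such a mirror from a directory of pre-fetched release binaries; the directory path and layout are assumptions for illustration, not details taken from the test:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory whose layout mirrors the upstream URL paths that
		// minikube requests for kubectl/kubelet/kubeadm downloads; the exact layout
		// and path here are assumptions for illustration.
		fs := http.FileServer(http.Dir("/var/cache/k8s-binaries"))
		log.Println("serving binary mirror on http://127.0.0.1:37911")
		log.Fatal(http.ListenAndServe("127.0.0.1:37911", fs))
	}

With the server running, a start command like the one above can be pointed at it with `--binary-mirror http://127.0.0.1:37911`.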

                                                
                                    
x
+
TestOffline (67.34s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-824747 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-824747 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.297850641s)
helpers_test.go:175: Cleaning up "offline-crio-824747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-824747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-824747: (1.039946291s)
--- PASS: TestOffline (67.34s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-624151
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-624151: exit status 85 (48.880167ms)

                                                
                                                
-- stdout --
	* Profile "addons-624151" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-624151"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-624151
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-624151: exit status 85 (48.399171ms)

                                                
                                                
-- stdout --
	* Profile "addons-624151" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-624151"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (154.7s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-624151 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-624151 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m34.70126762s)
--- PASS: TestAddons/Setup (154.70s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-624151 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-624151 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/parallel/Registry (16.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.107986ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-kbn7c" [825a2f6e-bea8-4451-bc76-8ab82bd3e8f4] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006169024s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6z85d" [f926e212-9d55-48fa-8149-0c86aaff8647] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004824177s
addons_test.go:342: (dbg) Run:  kubectl --context addons-624151 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-624151 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-624151 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.846932171s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 ip
2024/08/05 11:31:02 [DEBUG] GET http://192.168.39.142:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.64s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.95s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n79rk" [2c7f8829-f036-44a6-b51a-38ac1de3304c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004639726s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-624151
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-624151: (5.944187059s)
--- PASS: TestAddons/parallel/InspektorGadget (11.95s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.510961ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-g6dj9" [b48dc3b9-5ca0-4b5c-a47b-ed3b9a318ea5] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005098649s
addons_test.go:475: (dbg) Run:  kubectl --context addons-624151 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-624151 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.149592441s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.77s)

                                                
                                    
TestAddons/parallel/CSI (92.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 10.220041ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-624151 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-624151 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [230e5009-e4ce-4169-a971-26c2b896b41b] Pending
helpers_test.go:344: "task-pv-pod" [230e5009-e4ce-4169-a971-26c2b896b41b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [230e5009-e4ce-4169-a971-26c2b896b41b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004353193s
addons_test.go:590: (dbg) Run:  kubectl --context addons-624151 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-624151 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-624151 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-624151 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-624151 delete pod task-pv-pod: (1.263880884s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-624151 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-624151 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-624151 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a41a6a58-a648-4aa5-a0e9-89d7ce1f7e5d] Pending
helpers_test.go:344: "task-pv-pod-restore" [a41a6a58-a648-4aa5-a0e9-89d7ce1f7e5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a41a6a58-a648-4aa5-a0e9-89d7ce1f7e5d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004891701s
addons_test.go:632: (dbg) Run:  kubectl --context addons-624151 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-624151 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-624151 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.970005961s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (92.06s)

                                                
                                    
TestAddons/parallel/Headlamp (20.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-624151 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-624151 --alsologtostderr -v=1: (1.124245497s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-w84kb" [53faf1f6-4138-4d20-a7fe-d73a30fb3ec9] Pending
helpers_test.go:344: "headlamp-9d868696f-w84kb" [53faf1f6-4138-4d20-a7fe-d73a30fb3ec9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-w84kb" [53faf1f6-4138-4d20-a7fe-d73a30fb3ec9] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00431586s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 addons disable headlamp --alsologtostderr -v=1: (5.939922083s)
--- PASS: TestAddons/parallel/Headlamp (20.07s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-l6xlh" [d3629ba3-f7c5-4ecc-a08c-92456d35845f] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004175824s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-624151
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                    
TestAddons/parallel/LocalPath (57.25s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-624151 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-624151 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ca808685-c97e-49d8-b57a-ce8b70763d19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ca808685-c97e-49d8-b57a-ce8b70763d19] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ca808685-c97e-49d8-b57a-ce8b70763d19] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004070459s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-624151 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 ssh "cat /opt/local-path-provisioner/pvc-04dfcdb1-8800-4729-a32a-d013816c2f92_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-624151 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-624151 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.315991968s)
--- PASS: TestAddons/parallel/LocalPath (57.25s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kgtjf" [bb17bf33-643c-4417-8bb1-1814162e0e18] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007181458s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-624151
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

                                                
                                    
TestAddons/parallel/Yakd (10.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-kwx4d" [3e4ba2f2-c791-42fd-bd6c-8c58786c2d95] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006939937s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-624151 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-624151 addons disable yakd --alsologtostderr -v=1: (5.860143998s)
--- PASS: TestAddons/parallel/Yakd (10.87s)

                                                
                                    
TestCertOptions (98.09s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-823434 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-823434 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m36.550948983s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-823434 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-823434 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-823434 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-823434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-823434
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-823434: (1.049199742s)
--- PASS: TestCertOptions (98.09s)

                                                
                                    
TestCertExpiration (314.48s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623276 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623276 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m52.234699916s)
E0805 12:40:27.755927  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623276 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623276 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (21.370979932s)
helpers_test.go:175: Cleaning up "cert-expiration-623276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-623276
--- PASS: TestCertExpiration (314.48s)

                                                
                                    
TestForceSystemdFlag (83.78s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-960699 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-960699 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.783004058s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-960699 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-960699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-960699
--- PASS: TestForceSystemdFlag (83.78s)

                                                
                                    
TestForceSystemdEnv (76.74s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-882422 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-882422 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.965086641s)
helpers_test.go:175: Cleaning up "force-systemd-env-882422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-882422
--- PASS: TestForceSystemdEnv (76.74s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.51s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.51s)

                                                
                                    
TestErrorSpam/setup (39.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-215748 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-215748 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-215748 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-215748 --driver=kvm2  --container-runtime=crio: (39.549883422s)
--- PASS: TestErrorSpam/setup (39.55s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

                                                
                                    
TestErrorSpam/stop (4.78s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 stop: (2.277124766s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 stop: (1.249302983s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-215748 --log_dir /tmp/nospam-215748 stop: (1.258117403s)
--- PASS: TestErrorSpam/stop (4.78s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19377-383955/.minikube/files/etc/test/nested/copy/391219/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.81s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014296 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0805 11:40:27.757321  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:27.763461  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:27.773702  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:27.794207  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:27.834566  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:27.914925  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:28.075345  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:28.395946  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:29.036845  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:30.317345  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:32.878145  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-014296 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.812944874s)
--- PASS: TestFunctional/serial/StartWithProxy (55.81s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.45s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014296 --alsologtostderr -v=8
E0805 11:40:37.998693  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:40:48.239500  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:41:08.720481  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-014296 --alsologtostderr -v=8: (54.449240116s)
functional_test.go:659: soft start took 54.449925756s for "functional-014296" cluster.
--- PASS: TestFunctional/serial/SoftStart (54.45s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-014296 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 cache add registry.k8s.io/pause:3.1: (1.575888707s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 cache add registry.k8s.io/pause:latest: (1.004992983s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-014296 /tmp/TestFunctionalserialCacheCmdcacheadd_local921935536/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cache add minikube-local-cache-test:functional-014296
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 cache add minikube-local-cache-test:functional-014296: (1.886393416s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cache delete minikube-local-cache-test:functional-014296
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-014296
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.259306ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 kubectl -- --context functional-014296 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-014296 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (60.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0805 11:41:49.682078  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-014296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m0.036922202s)
functional_test.go:757: restart took 1m0.037048873s for "functional-014296" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (60.04s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-014296 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 logs: (1.550144199s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 logs --file /tmp/TestFunctionalserialLogsFileCmd67239028/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 logs --file /tmp/TestFunctionalserialLogsFileCmd67239028/001/logs.txt: (1.587667208s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.59s)

                                                
                                    
TestFunctional/serial/InvalidService (9.72s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-014296 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-014296
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-014296: exit status 115 (297.522167ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.155:30720 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-014296 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-014296 delete -f testdata/invalidsvc.yaml: (6.22514744s)
--- PASS: TestFunctional/serial/InvalidService (9.72s)
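
The check above can be replayed by hand against any profile; a minimal sketch using the profile and manifest from this run (any Service whose pods never come up behaves the same way):

  $ kubectl --context functional-014296 apply -f testdata/invalidsvc.yaml
  $ out/minikube-linux-amd64 service invalid-svc -p functional-014296
    # exits 115 (SVC_UNREACHABLE): the NodePort URL is printed, but no running pod backs the service
  $ kubectl --context functional-014296 delete -f testdata/invalidsvc.yaml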

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 config get cpus: exit status 14 (54.194718ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 config get cpus: exit status 14 (43.77914ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
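
The config round trip above can be reproduced directly; a minimal sketch against the same profile (exit status 14 is the expected "key not found" result):

  $ out/minikube-linux-amd64 -p functional-014296 config unset cpus
  $ out/minikube-linux-amd64 -p functional-014296 config get cpus    # exit 14: key not in config
  $ out/minikube-linux-amd64 -p functional-014296 config set cpus 2
  $ out/minikube-linux-amd64 -p functional-014296 config get cpus    # now returns the stored value
  $ out/minikube-linux-amd64 -p functional-014296 config unset cpus
  $ out/minikube-linux-amd64 -p functional-014296 config get cpus    # exit 14 again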

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-014296 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-014296 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 400255: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.89s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-014296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.497352ms)

                                                
                                                
-- stdout --
	* [functional-014296] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:42:55.154697  399988 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:42:55.154968  399988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:42:55.154979  399988 out.go:304] Setting ErrFile to fd 2...
	I0805 11:42:55.154986  399988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:42:55.155199  399988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:42:55.155785  399988 out.go:298] Setting JSON to false
	I0805 11:42:55.156833  399988 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5122,"bootTime":1722853053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:42:55.156907  399988 start.go:139] virtualization: kvm guest
	I0805 11:42:55.159124  399988 out.go:177] * [functional-014296] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 11:42:55.160692  399988 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:42:55.160684  399988 notify.go:220] Checking for updates...
	I0805 11:42:55.163141  399988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:42:55.164426  399988 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:42:55.165750  399988 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:42:55.167098  399988 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:42:55.168429  399988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:42:55.170155  399988 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:42:55.170738  399988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:42:55.170806  399988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:42:55.187271  399988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0805 11:42:55.187775  399988 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:42:55.188370  399988 main.go:141] libmachine: Using API Version  1
	I0805 11:42:55.188396  399988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:42:55.188830  399988 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:42:55.189058  399988 main.go:141] libmachine: (functional-014296) Calling .DriverName
	I0805 11:42:55.189380  399988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:42:55.189809  399988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:42:55.189854  399988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:42:55.205750  399988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0805 11:42:55.206274  399988 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:42:55.206748  399988 main.go:141] libmachine: Using API Version  1
	I0805 11:42:55.206767  399988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:42:55.207147  399988 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:42:55.207355  399988 main.go:141] libmachine: (functional-014296) Calling .DriverName
	I0805 11:42:55.247986  399988 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 11:42:55.249210  399988 start.go:297] selected driver: kvm2
	I0805 11:42:55.249229  399988 start.go:901] validating driver "kvm2" against &{Name:functional-014296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-014296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:42:55.249395  399988 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:42:55.251761  399988 out.go:177] 
	W0805 11:42:55.253162  399988 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 11:42:55.254544  399988 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014296 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
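
The dry-run validation above can be reproduced with two invocations; a minimal sketch (250MB is below the 1800MB usable minimum, so the first command exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second validates the existing profile cleanly):

  $ out/minikube-linux-amd64 start -p functional-014296 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 start -p functional-014296 --dry-run --driver=kvm2 --container-runtime=crio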

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-014296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-014296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.458491ms)

                                                
                                                
-- stdout --
	* [functional-014296] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 11:42:55.469126  400068 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:42:55.469248  400068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:42:55.469259  400068 out.go:304] Setting ErrFile to fd 2...
	I0805 11:42:55.469266  400068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:42:55.469586  400068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 11:42:55.470152  400068 out.go:298] Setting JSON to false
	I0805 11:42:55.471198  400068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5122,"bootTime":1722853053,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 11:42:55.471266  400068 start.go:139] virtualization: kvm guest
	I0805 11:42:55.473875  400068 out.go:177] * [functional-014296] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0805 11:42:55.475497  400068 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:42:55.475595  400068 notify.go:220] Checking for updates...
	I0805 11:42:55.478743  400068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:42:55.480217  400068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 11:42:55.481590  400068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 11:42:55.483234  400068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 11:42:55.484469  400068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:42:55.486204  400068 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 11:42:55.486632  400068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:42:55.486682  400068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:42:55.510631  400068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0805 11:42:55.511315  400068 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:42:55.511978  400068 main.go:141] libmachine: Using API Version  1
	I0805 11:42:55.512025  400068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:42:55.512366  400068 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:42:55.512590  400068 main.go:141] libmachine: (functional-014296) Calling .DriverName
	I0805 11:42:55.512921  400068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:42:55.513254  400068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 11:42:55.513316  400068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 11:42:55.529072  400068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0805 11:42:55.529651  400068 main.go:141] libmachine: () Calling .GetVersion
	I0805 11:42:55.530276  400068 main.go:141] libmachine: Using API Version  1
	I0805 11:42:55.530303  400068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 11:42:55.530751  400068 main.go:141] libmachine: () Calling .GetMachineName
	I0805 11:42:55.530960  400068 main.go:141] libmachine: (functional-014296) Calling .DriverName
	I0805 11:42:55.566306  400068 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0805 11:42:55.567555  400068 start.go:297] selected driver: kvm2
	I0805 11:42:55.567568  400068 start.go:901] validating driver "kvm2" against &{Name:functional-014296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-014296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:42:55.567690  400068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:42:55.569690  400068 out.go:177] 
	W0805 11:42:55.570852  400068 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 11:42:55.572164  400068 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-014296 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-014296 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-nnvc9" [a3d86b52-9c71-41ce-81bb-b0974cc57a13] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-nnvc9" [a3d86b52-9c71-41ce-81bb-b0974cc57a13] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008299105s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.155:31072
functional_test.go:1671: http://192.168.39.155:31072: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-nnvc9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.155:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.155:31072
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.54s)
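
The NodePort round trip verified above follows the usual deploy/expose/curl pattern; a minimal sketch (the curl step stands in for the HTTP fetch the test performs against the printed URL, which is specific to this run):

  $ kubectl --context functional-014296 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  $ kubectl --context functional-014296 expose deployment hello-node-connect --type=NodePort --port=8080
  $ out/minikube-linux-amd64 -p functional-014296 service hello-node-connect --url
  http://192.168.39.155:31072
  $ curl http://192.168.39.155:31072    # echoserver answers with the pod hostname and request headers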

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [70bd0a3a-f6e0-4e84-8908-17eacd9ecd92] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005351037s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-014296 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-014296 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-014296 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-014296 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e57b2d7c-9807-4c27-909a-1079c1fad086] Pending
helpers_test.go:344: "sp-pod" [e57b2d7c-9807-4c27-909a-1079c1fad086] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0805 11:43:11.603292  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [e57b2d7c-9807-4c27-909a-1079c1fad086] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004698251s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-014296 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-014296 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-014296 delete -f testdata/storage-provisioner/pod.yaml: (1.023249962s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-014296 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0849c481-15ea-4780-a559-9b7a71fc77d3] Pending
helpers_test.go:344: "sp-pod" [0849c481-15ea-4780-a559-9b7a71fc77d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0849c481-15ea-4780-a559-9b7a71fc77d3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004077517s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-014296 exec sp-pod -- ls /tmp/mount
E0805 11:45:27.753039  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 11:45:55.443819  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.75s)
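
The persistence check above amounts to writing through the PVC, recreating the pod, and confirming the file survives; a minimal sketch with the manifests the test uses:

  $ kubectl --context functional-014296 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-014296 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-014296 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-014296 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-014296 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-014296 exec sp-pod -- ls /tmp/mount    # foo is still present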

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh -n functional-014296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cp functional-014296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3706908383/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh -n functional-014296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh -n functional-014296 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)
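
The cp/ssh pairing above is the standard way to verify a copied file end to end; a minimal sketch (the local destination path here is an arbitrary example, unlike the generated temp directory used in the run):

  $ out/minikube-linux-amd64 -p functional-014296 cp testdata/cp-test.txt /home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p functional-014296 ssh -n functional-014296 "sudo cat /home/docker/cp-test.txt"
  $ out/minikube-linux-amd64 -p functional-014296 cp functional-014296:/home/docker/cp-test.txt /tmp/cp-test.txt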

                                                
                                    
TestFunctional/parallel/MySQL (23.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-014296 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2024/08/05 11:43:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-64454c8b5c-9ck55" [d06165e7-cc1a-4449-bbc2-1a19adbe8615] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-9ck55" [d06165e7-cc1a-4449-bbc2-1a19adbe8615] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003908221s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-014296 exec mysql-64454c8b5c-9ck55 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-014296 exec mysql-64454c8b5c-9ck55 -- mysql -ppassword -e "show databases;": exit status 1 (156.539656ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-014296 exec mysql-64454c8b5c-9ck55 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.02s)
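
Once the mysql pod is Running, the query above is a plain kubectl exec; the first attempt in this run failed with ERROR 2002 because mysqld was still initializing inside the container, so a short retry is normal:

  $ kubectl --context functional-014296 exec mysql-64454c8b5c-9ck55 -- mysql -ppassword -e "show databases;"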

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/391219/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /etc/test/nested/copy/391219/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/391219.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /etc/ssl/certs/391219.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/391219.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /usr/share/ca-certificates/391219.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3912192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /etc/ssl/certs/3912192.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3912192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /usr/share/ca-certificates/3912192.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)
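
The cert sync check simply reads the synced files from their three locations inside the VM; a minimal sketch (391219 is the test process ID used to name the files in this run and will differ elsewhere):

  $ out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /etc/ssl/certs/391219.pem"
  $ out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /usr/share/ca-certificates/391219.pem"
  $ out/minikube-linux-amd64 -p functional-014296 ssh "sudo cat /etc/ssl/certs/51391683.0"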

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-014296 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh "sudo systemctl is-active docker": exit status 1 (250.608229ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh "sudo systemctl is-active containerd": exit status 1 (245.086709ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
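
Because this profile runs crio, the other runtimes should report inactive; a minimal manual check (systemctl is-active exits non-zero for an inactive unit, which is why ssh reports status 3 above):

  $ out/minikube-linux-amd64 -p functional-014296 ssh "sudo systemctl is-active docker"       # inactive
  $ out/minikube-linux-amd64 -p functional-014296 ssh "sudo systemctl is-active containerd"   # inactive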

                                                
                                    
TestFunctional/parallel/License (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-014296 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-014296 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-vnrjf" [098e804d-e8cc-436a-ae88-74fc9360ddf3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-vnrjf" [098e804d-e8cc-436a-ae88-74fc9360ddf3] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004365513s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014296 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| localhost/my-image                      | functional-014296  | 7e5a2571b1511 | 1.47MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kicbase/echo-server           | functional-014296  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| localhost/minikube-local-cache-test     | functional-014296  | ac2f8c272e9bc | 3.33kB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014296 image ls --format table --alsologtostderr:
I0805 11:43:26.392093  401460 out.go:291] Setting OutFile to fd 1 ...
I0805 11:43:26.392381  401460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:26.392394  401460 out.go:304] Setting ErrFile to fd 2...
I0805 11:43:26.392400  401460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:26.392679  401460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
I0805 11:43:26.394204  401460 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:26.394564  401460 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:26.395403  401460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:26.395461  401460 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:26.410453  401460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43457
I0805 11:43:26.410992  401460 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:26.411721  401460 main.go:141] libmachine: Using API Version  1
I0805 11:43:26.411767  401460 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:26.412119  401460 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:26.412331  401460 main.go:141] libmachine: (functional-014296) Calling .GetState
I0805 11:43:26.414004  401460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:26.414047  401460 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:26.428231  401460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
I0805 11:43:26.428618  401460 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:26.429091  401460 main.go:141] libmachine: Using API Version  1
I0805 11:43:26.429116  401460 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:26.429528  401460 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:26.429743  401460 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:43:26.429938  401460 ssh_runner.go:195] Run: systemctl --version
I0805 11:43:26.429965  401460 main.go:141] libmachine: (functional-014296) Calling .GetSSHHostname
I0805 11:43:26.432303  401460 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:26.432727  401460 main.go:141] libmachine: (functional-014296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:fd:03", ip: ""} in network mk-functional-014296: {Iface:virbr1 ExpiryTime:2024-08-05 12:39:56 +0000 UTC Type:0 Mac:52:54:00:29:fd:03 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:functional-014296 Clientid:01:52:54:00:29:fd:03}
I0805 11:43:26.432752  401460 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined IP address 192.168.39.155 and MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:26.432901  401460 main.go:141] libmachine: (functional-014296) Calling .GetSSHPort
I0805 11:43:26.433053  401460 main.go:141] libmachine: (functional-014296) Calling .GetSSHKeyPath
I0805 11:43:26.433211  401460 main.go:141] libmachine: (functional-014296) Calling .GetSSHUsername
I0805 11:43:26.433339  401460 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/functional-014296/id_rsa Username:docker}
I0805 11:43:26.567352  401460 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 11:43:26.698303  401460 main.go:141] libmachine: Making call to close driver server
I0805 11:43:26.698320  401460 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:26.698626  401460 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:26.698644  401460 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 11:43:26.698653  401460 main.go:141] libmachine: Making call to close driver server
I0805 11:43:26.698662  401460 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:26.698707  401460 main.go:141] libmachine: (functional-014296) DBG | Closing plugin on server side
I0805 11:43:26.698906  401460 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:26.698927  401460 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)
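
The table above is built from "sudo crictl images --output json" inside the VM (visible in the stderr trace); the same inventory can be listed in any supported output format:

  $ out/minikube-linux-amd64 -p functional-014296 image ls --format table
  $ out/minikube-linux-amd64 -p functional-014296 image ls --format json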

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014296 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-014296"],"size":"4943877"},{"id":"23f93697145d47643b76789bd9146ddb0842b4450560e0a0c21745afcdcdb351","repoDigests":["docker.io/library/9791d9e
4c8cff518d34363de06a8ecf21067fad05a48bc18401671c2f20f9501-tmp@sha256:bed3212c80a3e37a9642653d69ba0a254fa9915d3129c7f3a3fc803c3548c47f"],"repoTags":[],"size":"1466018"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ac2f8c272e9bcbb24c80f9cfdd92a1eb59ac98b1ae3a0a37b7732438787a4ae9","repoDigests":["localhost/minikube-local-cache-test@sha256:62b05f804b09e42af2e16a4dc4b42906f8f01c8f7371f0af27cea93babed732d"],"repoTags":["localhost/minikube-local-cache-test:functional-014296"],"size":"3330"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"7e5a2571b1511e0659b7ec6a3f209c2d3186364268a83e360c6e3a6c85c0dc32","repoDigests":["localhost/my-image@sha256:82660852c4784a851300ffd5b6945920d51c8a7380fd14759b6073ff6a677868"],"repoTags":["localhost/my-image:functional-014296"],"size":"1468600"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014296 image ls --format json --alsologtostderr:
I0805 11:43:25.914998  401436 out.go:291] Setting OutFile to fd 1 ...
I0805 11:43:25.915291  401436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:25.915301  401436 out.go:304] Setting ErrFile to fd 2...
I0805 11:43:25.915308  401436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:25.915502  401436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
I0805 11:43:25.916173  401436 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:25.916296  401436 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:25.916718  401436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:25.916780  401436 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:25.932127  401436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
I0805 11:43:25.932625  401436 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:25.933237  401436 main.go:141] libmachine: Using API Version  1
I0805 11:43:25.933259  401436 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:25.933679  401436 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:25.933880  401436 main.go:141] libmachine: (functional-014296) Calling .GetState
I0805 11:43:25.935963  401436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:25.936004  401436 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:25.952484  401436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
I0805 11:43:25.952912  401436 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:25.953466  401436 main.go:141] libmachine: Using API Version  1
I0805 11:43:25.953499  401436 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:25.953886  401436 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:25.954155  401436 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:43:25.954391  401436 ssh_runner.go:195] Run: systemctl --version
I0805 11:43:25.954428  401436 main.go:141] libmachine: (functional-014296) Calling .GetSSHHostname
I0805 11:43:25.957467  401436 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:25.957925  401436 main.go:141] libmachine: (functional-014296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:fd:03", ip: ""} in network mk-functional-014296: {Iface:virbr1 ExpiryTime:2024-08-05 12:39:56 +0000 UTC Type:0 Mac:52:54:00:29:fd:03 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:functional-014296 Clientid:01:52:54:00:29:fd:03}
I0805 11:43:25.957964  401436 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined IP address 192.168.39.155 and MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:25.958087  401436 main.go:141] libmachine: (functional-014296) Calling .GetSSHPort
I0805 11:43:25.958281  401436 main.go:141] libmachine: (functional-014296) Calling .GetSSHKeyPath
I0805 11:43:25.958458  401436 main.go:141] libmachine: (functional-014296) Calling .GetSSHUsername
I0805 11:43:25.958601  401436 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/functional-014296/id_rsa Username:docker}
I0805 11:43:26.103785  401436 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 11:43:26.336125  401436 main.go:141] libmachine: Making call to close driver server
I0805 11:43:26.336146  401436 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:26.336456  401436 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:26.336481  401436 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 11:43:26.336490  401436 main.go:141] libmachine: (functional-014296) DBG | Closing plugin on server side
I0805 11:43:26.336493  401436 main.go:141] libmachine: Making call to close driver server
I0805 11:43:26.336552  401436 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:26.336763  401436 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:26.336781  401436 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014296 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ac2f8c272e9bcbb24c80f9cfdd92a1eb59ac98b1ae3a0a37b7732438787a4ae9
repoDigests:
- localhost/minikube-local-cache-test@sha256:62b05f804b09e42af2e16a4dc4b42906f8f01c8f7371f0af27cea93babed732d
repoTags:
- localhost/minikube-local-cache-test:functional-014296
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-014296
size: "4943877"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014296 image ls --format yaml --alsologtostderr:
I0805 11:43:21.194352  401320 out.go:291] Setting OutFile to fd 1 ...
I0805 11:43:21.194492  401320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:21.194503  401320 out.go:304] Setting ErrFile to fd 2...
I0805 11:43:21.194510  401320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:21.194802  401320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
I0805 11:43:21.195618  401320 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:21.195784  401320 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:21.196406  401320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:21.196473  401320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:21.211633  401320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
I0805 11:43:21.212136  401320 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:21.212728  401320 main.go:141] libmachine: Using API Version  1
I0805 11:43:21.212753  401320 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:21.213111  401320 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:21.213329  401320 main.go:141] libmachine: (functional-014296) Calling .GetState
I0805 11:43:21.215385  401320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:21.215443  401320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:21.230579  401320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
I0805 11:43:21.231085  401320 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:21.231604  401320 main.go:141] libmachine: Using API Version  1
I0805 11:43:21.231632  401320 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:21.232023  401320 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:21.232215  401320 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:43:21.232440  401320 ssh_runner.go:195] Run: systemctl --version
I0805 11:43:21.232469  401320 main.go:141] libmachine: (functional-014296) Calling .GetSSHHostname
I0805 11:43:21.235016  401320 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:21.235382  401320 main.go:141] libmachine: (functional-014296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:fd:03", ip: ""} in network mk-functional-014296: {Iface:virbr1 ExpiryTime:2024-08-05 12:39:56 +0000 UTC Type:0 Mac:52:54:00:29:fd:03 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:functional-014296 Clientid:01:52:54:00:29:fd:03}
I0805 11:43:21.235416  401320 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined IP address 192.168.39.155 and MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:21.235596  401320 main.go:141] libmachine: (functional-014296) Calling .GetSSHPort
I0805 11:43:21.235792  401320 main.go:141] libmachine: (functional-014296) Calling .GetSSHKeyPath
I0805 11:43:21.235969  401320 main.go:141] libmachine: (functional-014296) Calling .GetSSHUsername
I0805 11:43:21.236140  401320 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/functional-014296/id_rsa Username:docker}
I0805 11:43:21.331193  401320 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 11:43:21.415047  401320 main.go:141] libmachine: Making call to close driver server
I0805 11:43:21.415065  401320 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:21.415384  401320 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:21.415409  401320 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 11:43:21.415423  401320 main.go:141] libmachine: (functional-014296) DBG | Closing plugin on server side
I0805 11:43:21.415428  401320 main.go:141] libmachine: Making call to close driver server
I0805 11:43:21.415485  401320 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:21.415734  401320 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:21.415765  401320 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh pgrep buildkitd: exit status 1 (211.532038ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image build -t localhost/my-image:functional-014296 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 image build -t localhost/my-image:functional-014296 testdata/build --alsologtostderr: (3.813247617s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-014296 image build -t localhost/my-image:functional-014296 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 23f93697145
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-014296
--> 7e5a2571b15
Successfully tagged localhost/my-image:functional-014296
7e5a2571b1511e0659b7ec6a3f209c2d3186364268a83e360c6e3a6c85c0dc32
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-014296 image build -t localhost/my-image:functional-014296 testdata/build --alsologtostderr:
I0805 11:43:21.688468  401389 out.go:291] Setting OutFile to fd 1 ...
I0805 11:43:21.688797  401389 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:21.688812  401389 out.go:304] Setting ErrFile to fd 2...
I0805 11:43:21.688818  401389 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:43:21.689131  401389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
I0805 11:43:21.689982  401389 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:21.690642  401389 config.go:182] Loaded profile config "functional-014296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 11:43:21.691018  401389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:21.691068  401389 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:21.707241  401389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
I0805 11:43:21.707732  401389 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:21.708481  401389 main.go:141] libmachine: Using API Version  1
I0805 11:43:21.708505  401389 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:21.708890  401389 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:21.709109  401389 main.go:141] libmachine: (functional-014296) Calling .GetState
I0805 11:43:21.711215  401389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 11:43:21.711266  401389 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 11:43:21.726364  401389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40483
I0805 11:43:21.726816  401389 main.go:141] libmachine: () Calling .GetVersion
I0805 11:43:21.727348  401389 main.go:141] libmachine: Using API Version  1
I0805 11:43:21.727378  401389 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 11:43:21.727692  401389 main.go:141] libmachine: () Calling .GetMachineName
I0805 11:43:21.727902  401389 main.go:141] libmachine: (functional-014296) Calling .DriverName
I0805 11:43:21.728137  401389 ssh_runner.go:195] Run: systemctl --version
I0805 11:43:21.728163  401389 main.go:141] libmachine: (functional-014296) Calling .GetSSHHostname
I0805 11:43:21.731316  401389 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:21.731895  401389 main.go:141] libmachine: (functional-014296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:fd:03", ip: ""} in network mk-functional-014296: {Iface:virbr1 ExpiryTime:2024-08-05 12:39:56 +0000 UTC Type:0 Mac:52:54:00:29:fd:03 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:functional-014296 Clientid:01:52:54:00:29:fd:03}
I0805 11:43:21.731919  401389 main.go:141] libmachine: (functional-014296) DBG | domain functional-014296 has defined IP address 192.168.39.155 and MAC address 52:54:00:29:fd:03 in network mk-functional-014296
I0805 11:43:21.732088  401389 main.go:141] libmachine: (functional-014296) Calling .GetSSHPort
I0805 11:43:21.732268  401389 main.go:141] libmachine: (functional-014296) Calling .GetSSHKeyPath
I0805 11:43:21.732443  401389 main.go:141] libmachine: (functional-014296) Calling .GetSSHUsername
I0805 11:43:21.732605  401389 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/functional-014296/id_rsa Username:docker}
I0805 11:43:21.817283  401389 build_images.go:161] Building image from path: /tmp/build.3545607021.tar
I0805 11:43:21.817351  401389 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0805 11:43:21.827730  401389 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3545607021.tar
I0805 11:43:21.834170  401389 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3545607021.tar: stat -c "%s %y" /var/lib/minikube/build/build.3545607021.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3545607021.tar': No such file or directory
I0805 11:43:21.834205  401389 ssh_runner.go:362] scp /tmp/build.3545607021.tar --> /var/lib/minikube/build/build.3545607021.tar (3072 bytes)
I0805 11:43:21.861558  401389 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3545607021
I0805 11:43:21.873139  401389 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3545607021 -xf /var/lib/minikube/build/build.3545607021.tar
I0805 11:43:21.885732  401389 crio.go:315] Building image: /var/lib/minikube/build/build.3545607021
I0805 11:43:21.885844  401389 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-014296 /var/lib/minikube/build/build.3545607021 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0805 11:43:25.374844  401389 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-014296 /var/lib/minikube/build/build.3545607021 --cgroup-manager=cgroupfs: (3.488963931s)
I0805 11:43:25.374936  401389 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3545607021
I0805 11:43:25.410757  401389 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3545607021.tar
I0805 11:43:25.438312  401389 build_images.go:217] Built localhost/my-image:functional-014296 from /tmp/build.3545607021.tar
I0805 11:43:25.438365  401389 build_images.go:133] succeeded building to: functional-014296
I0805 11:43:25.438373  401389 build_images.go:134] failed building to: 
I0805 11:43:25.438406  401389 main.go:141] libmachine: Making call to close driver server
I0805 11:43:25.438422  401389 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:25.438733  401389 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:25.438756  401389 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 11:43:25.438766  401389 main.go:141] libmachine: Making call to close driver server
I0805 11:43:25.438774  401389 main.go:141] libmachine: (functional-014296) Calling .Close
I0805 11:43:25.439066  401389 main.go:141] libmachine: Successfully made call to close driver server
I0805 11:43:25.439102  401389 main.go:141] libmachine: (functional-014296) DBG | Closing plugin on server side
I0805 11:43:25.439172  401389 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.45s)
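
The STEP lines above imply a three-step Containerfile under testdata/build. As a rough sketch only (the directory layout and the contents of content.txt are assumptions, and `minikube` stands in for the out/minikube-linux-amd64 binary the test drives), the same build can be reproduced by hand with:

mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "test content" > content.txt          # placeholder; the real content.txt in testdata/build is an assumption
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
minikube -p functional-014296 image build -t localhost/my-image:functional-014296 . --alsologtostderr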

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.924841678s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-014296
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "309.465317ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "68.303153ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "310.852105ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "48.031117ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image load --daemon docker.io/kicbase/echo-server:functional-014296 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-014296 image load --daemon docker.io/kicbase/echo-server:functional-014296 --alsologtostderr: (2.777633606s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.10s)
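
The Setup and ImageLoadDaemon steps above amount to tagging a locally pulled image and loading it into the cluster's CRI-O image store. A minimal sketch of that flow, with `minikube` standing in for the out/minikube-linux-amd64 binary used by the test:

docker pull docker.io/kicbase/echo-server:1.0
docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-014296
minikube -p functional-014296 image load --daemon docker.io/kicbase/echo-server:functional-014296
minikube -p functional-014296 image ls      # the functional-014296 tag should now appear in the listing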

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image load --daemon docker.io/kicbase/echo-server:functional-014296 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-014296
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image load --daemon docker.io/kicbase/echo-server:functional-014296 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image save docker.io/kicbase/echo-server:functional-014296 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image rm docker.io/kicbase/echo-server:functional-014296 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
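
ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a tarball round trip. A sketch of that round trip, with /tmp/echo-server-save.tar standing in for the workspace path used in this run:

minikube -p functional-014296 image save docker.io/kicbase/echo-server:functional-014296 /tmp/echo-server-save.tar
minikube -p functional-014296 image rm docker.io/kicbase/echo-server:functional-014296
minikube -p functional-014296 image load /tmp/echo-server-save.tar
minikube -p functional-014296 image ls | grep echo-server    # the image should be back after the load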

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-014296
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 image save --daemon docker.io/kicbase/echo-server:functional-014296 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-014296
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 service list -o json
functional_test.go:1490: Took "612.61875ms" to run "out/minikube-linux-amd64 -p functional-014296 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.155:31956
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.155:31956
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
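
The ServiceCmd checks resolve the NodePort URL for the hello-node service. A sketch of querying and exercising it by hand (the curl call is an assumption, not part of the test; `minikube` stands in for the binary used above):

URL=$(minikube -p functional-014296 service hello-node --url)   # printed http://192.168.39.155:31956 in this run
echo "$URL"
curl -s "$URL"    # assumes the echo-server behind hello-node answers plain HTTP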

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdspecific-port4119252941/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (194.520913ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdspecific-port4119252941/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh "sudo umount -f /mount-9p": exit status 1 (187.488814ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-014296 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdspecific-port4119252941/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)
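
The specific-port test mounts a host directory into the guest over 9p on a fixed port, verifies it, and tears it down. A sketch of the same flow, assuming /tmp/host-dir exists on the host and `minikube` stands in for the test binary:

minikube mount -p functional-014296 /tmp/host-dir:/mount-9p --port 46464 &
MOUNT_PID=$!
minikube -p functional-014296 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount is visible in the guest
minikube -p functional-014296 ssh -- ls -la /mount-9p
kill "$MOUNT_PID"                                                     # stop the mount helper, mirroring the test's cleanup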

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3861791621/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3861791621/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3861791621/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T" /mount1: exit status 1 (226.17725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-014296 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-014296 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3861791621/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3861791621/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-014296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3861791621/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.14s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-014296
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-014296
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-014296
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (213.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-672593 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0805 11:47:52.926531  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:52.931809  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:52.942140  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:52.962401  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:53.002731  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:53.083220  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:53.243661  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:53.564547  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:54.205547  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:55.486176  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:47:58.047289  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:48:03.168083  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:48:13.408994  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:48:33.889878  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:49:14.850452  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 11:50:27.753066  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-672593 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m33.233248551s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (213.89s)
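
StartCluster brings up a multi-control-plane cluster with the --ha flag. The command below is the one from the log, with `minikube` standing in for out/minikube-linux-amd64:

minikube start -p ha-672593 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
minikube -p ha-672593 status -v=7 --alsologtostderr    # in this run the cluster came up as ha-672593, ha-672593-m02 and ha-672593-m03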

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- rollout status deployment/busybox
E0805 11:50:36.771076  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-672593 -- rollout status deployment/busybox: (5.377952515s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-dq7jg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-vn64j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-xx72g -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-dq7jg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-vn64j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-xx72g -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-dq7jg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-vn64j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-xx72g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.52s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-dq7jg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-dq7jg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-vn64j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-vn64j -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-xx72g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672593 -- exec busybox-fc5497c4f-xx72g -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)
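
PingHostFromPods checks host reachability from inside the busybox pods: it resolves host.minikube.internal and pings the resulting gateway address. A sketch of the same check against one pod (the pod name is from this run and will differ elsewhere):

kubectl --context ha-672593 exec busybox-fc5497c4f-dq7jg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context ha-672593 exec busybox-fc5497c4f-dq7jg -- sh -c "ping -c 1 192.168.39.1"    # 192.168.39.1 is the host-side gateway in this run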

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (85.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-672593 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-672593 -v=7 --alsologtostderr: (1m24.768539428s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (85.61s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-672593 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp testdata/cp-test.txt ha-672593:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593:/home/docker/cp-test.txt ha-672593-m02:/home/docker/cp-test_ha-672593_ha-672593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test_ha-672593_ha-672593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593:/home/docker/cp-test.txt ha-672593-m03:/home/docker/cp-test_ha-672593_ha-672593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test_ha-672593_ha-672593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593:/home/docker/cp-test.txt ha-672593-m04:/home/docker/cp-test_ha-672593_ha-672593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test_ha-672593_ha-672593-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp testdata/cp-test.txt ha-672593-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m02:/home/docker/cp-test.txt ha-672593:/home/docker/cp-test_ha-672593-m02_ha-672593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test_ha-672593-m02_ha-672593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m02:/home/docker/cp-test.txt ha-672593-m03:/home/docker/cp-test_ha-672593-m02_ha-672593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test_ha-672593-m02_ha-672593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m02:/home/docker/cp-test.txt ha-672593-m04:/home/docker/cp-test_ha-672593-m02_ha-672593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test_ha-672593-m02_ha-672593-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp testdata/cp-test.txt ha-672593-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt ha-672593:/home/docker/cp-test_ha-672593-m03_ha-672593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test_ha-672593-m03_ha-672593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt ha-672593-m02:/home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test_ha-672593-m03_ha-672593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m03:/home/docker/cp-test.txt ha-672593-m04:/home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test_ha-672593-m03_ha-672593-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp testdata/cp-test.txt ha-672593-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2308329850/001/cp-test_ha-672593-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt ha-672593:/home/docker/cp-test_ha-672593-m04_ha-672593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593 "sudo cat /home/docker/cp-test_ha-672593-m04_ha-672593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt ha-672593-m02:/home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test_ha-672593-m04_ha-672593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 cp ha-672593-m04:/home/docker/cp-test.txt ha-672593-m03:/home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m03 "sudo cat /home/docker/cp-test_ha-672593-m04_ha-672593-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.77s)
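Each `cp` above is paired with an `ssh -n <node> "sudo cat ..."` read-back so the copied contents can be compared on the source and destination nodes. A minimal sketch of a single hop from that sequence, with the profile and node names from this run:

# Copy a local file into the primary node, then fan it out to a secondary node.
out/minikube-linux-amd64 -p ha-672593 cp testdata/cp-test.txt ha-672593:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-672593 cp ha-672593:/home/docker/cp-test.txt ha-672593-m02:/home/docker/cp-test_ha-672593_ha-672593-m02.txt
# Read the file back on the destination node to confirm the copy landed intact.
out/minikube-linux-amd64 -p ha-672593 ssh -n ha-672593-m02 "sudo cat /home/docker/cp-test_ha-672593_ha-672593-m02.txt"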

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.486530535s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-672593 node delete m03 -v=7 --alsologtostderr: (16.636030345s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.36s)
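The node delete check ends with a go-template query that prints the Ready condition of every remaining node, so a healthy cluster shows only "True" lines. A hand-runnable sketch of that verification against the same profile:

# Drop the m03 control-plane node, then confirm all remaining nodes report Ready.
out/minikube-linux-amd64 -p ha-672593 node delete m03 -v=7 --alsologtostderr
kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'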

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (324.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-672593 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0805 12:05:27.756851  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 12:07:52.927399  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-672593 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m23.510216606s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (324.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (84.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-672593 --control-plane -v=7 --alsologtostderr
E0805 12:10:27.753286  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-672593 --control-plane -v=7 --alsologtostderr: (1m23.714314026s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-672593 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
x
+
TestJSONOutput/start/Command (95.3s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-034581 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0805 12:12:52.927164  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-034581 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.294414852s)
--- PASS: TestJSONOutput/start/Command (95.30s)
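With --output=json the start command emits one CloudEvents-style JSON object per line; the DistinctCurrentSteps and IncreasingCurrentSteps subtests below look at the data.currentstep field of the io.k8s.sigs.minikube.step events in that stream. A minimal sketch for extracting those step numbers from the same command, assuming jq is available on the host (jq is not part of the test itself):

# Print the currentstep of every step event; the subtests expect the values to be
# distinct and increasing across the run.
out/minikube-linux-amd64 start -p json-output-034581 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep'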

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-034581 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-034581 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-034581 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-034581 --output=json --user=testUser: (7.353925138s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-595755 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-595755 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.784239ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b5ea4ccc-72d9-4919-b522-0d24df29ed73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-595755] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b74f0851-00e4-4eec-b47e-abf1549b3c9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19377"}}
	{"specversion":"1.0","id":"caa8a0f4-0939-4e16-aeaa-6d52d29c8a38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9580c65a-275b-49e9-9f12-1b9cc5ccaee9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig"}}
	{"specversion":"1.0","id":"6e99074b-a644-4cc4-9b19-222d2a966bf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube"}}
	{"specversion":"1.0","id":"ae7fad35-fd7a-4b5c-a46c-9d0dbf1b2467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bc73a946-317b-4b1a-8caa-7f398bbf3c1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ea9286c-c393-42eb-adab-d4fab4513ab2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-595755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-595755
--- PASS: TestErrorJSONOutput (0.19s)
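The exit status 56 above shows up both as the process exit code and as a structured io.k8s.sigs.minikube.error event in the JSON stream. A minimal sketch for pulling that error out of the stream, again assuming jq on the host:

# Re-run the failing start and print the structured error event from the stream.
out/minikube-linux-amd64 start -p json-output-error-595755 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode) \(.data.name): \(.data.message)"'
# Expected, per the stdout captured above:
# 56 DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64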

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (89.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-533590 --driver=kvm2  --container-runtime=crio
E0805 12:13:30.807551  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-533590 --driver=kvm2  --container-runtime=crio: (45.169363334s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-536877 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-536877 --driver=kvm2  --container-runtime=crio: (42.162625109s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-533590
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-536877
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-536877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-536877
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-536877: (1.008120679s)
helpers_test.go:175: Cleaning up "first-533590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-533590
--- PASS: TestMinikubeProfile (89.99s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (32.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-411875 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-411875 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.282815742s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-411875 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-411875 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
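VerifyMountFirst checks the host directory exposed by --mount from two angles: it must be listable at /minikube-host inside the guest, and it must appear in the mount table as a 9p filesystem (started on port 46464 for this first profile). The same two probes can be run by hand:

# List the mounted host directory inside the VM.
out/minikube-linux-amd64 -p mount-start-1-411875 ssh -- ls /minikube-host
# Confirm the mount is a 9p filesystem.
out/minikube-linux-amd64 -p mount-start-1-411875 ssh -- mount | grep 9p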

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-428262 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0805 12:15:27.757272  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-428262 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.406413204s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-428262 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-428262 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-411875 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-428262 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-428262 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-428262
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-428262: (1.275225218s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.82s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-428262
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-428262: (21.822082358s)
--- PASS: TestMountStart/serial/RestartStopped (22.82s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-428262 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-428262 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (120.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841883 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0805 12:17:52.926353  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841883 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.226663211s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.62s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-841883 -- rollout status deployment/busybox: (3.838038387s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-r4zf7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-r4zf7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-r4zf7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.30s)
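The deploy check rolls out the busybox deployment across the two nodes and then verifies in-cluster DNS from each pod at three levels: an external name, the short service name, and the fully-qualified service name. A minimal sketch of those probes against one of the pods from this run:

# DNS probes from a busybox pod: external, cluster-short, and cluster-FQDN lookups.
out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- nslookup kubernetes.io
out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- nslookup kubernetes.default
out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- nslookup kubernetes.default.svc.cluster.local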

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-7lqm2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-r4zf7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-841883 -- exec busybox-fc5497c4f-r4zf7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (47.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-841883 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-841883 -v 3 --alsologtostderr: (46.932252389s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.51s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-841883 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp testdata/cp-test.txt multinode-841883:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2344340306/001/cp-test_multinode-841883.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883:/home/docker/cp-test.txt multinode-841883-m02:/home/docker/cp-test_multinode-841883_multinode-841883-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m02 "sudo cat /home/docker/cp-test_multinode-841883_multinode-841883-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883:/home/docker/cp-test.txt multinode-841883-m03:/home/docker/cp-test_multinode-841883_multinode-841883-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m03 "sudo cat /home/docker/cp-test_multinode-841883_multinode-841883-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp testdata/cp-test.txt multinode-841883-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2344340306/001/cp-test_multinode-841883-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt multinode-841883:/home/docker/cp-test_multinode-841883-m02_multinode-841883.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883 "sudo cat /home/docker/cp-test_multinode-841883-m02_multinode-841883.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883-m02:/home/docker/cp-test.txt multinode-841883-m03:/home/docker/cp-test_multinode-841883-m02_multinode-841883-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m03 "sudo cat /home/docker/cp-test_multinode-841883-m02_multinode-841883-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp testdata/cp-test.txt multinode-841883-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2344340306/001/cp-test_multinode-841883-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt multinode-841883:/home/docker/cp-test_multinode-841883-m03_multinode-841883.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883 "sudo cat /home/docker/cp-test_multinode-841883-m03_multinode-841883.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 cp multinode-841883-m03:/home/docker/cp-test.txt multinode-841883-m02:/home/docker/cp-test_multinode-841883-m03_multinode-841883-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 ssh -n multinode-841883-m02 "sudo cat /home/docker/cp-test_multinode-841883-m03_multinode-841883-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-841883 node stop m03: (1.476950345s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841883 status: exit status 7 (431.457416ms)

                                                
                                                
-- stdout --
	multinode-841883
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841883-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841883-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-841883 status --alsologtostderr: exit status 7 (422.122519ms)

                                                
                                                
-- stdout --
	multinode-841883
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-841883-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-841883-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:19:12.845502  420583 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:19:12.845641  420583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:19:12.845651  420583 out.go:304] Setting ErrFile to fd 2...
	I0805 12:19:12.845655  420583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:19:12.845857  420583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:19:12.846053  420583 out.go:298] Setting JSON to false
	I0805 12:19:12.846079  420583 mustload.go:65] Loading cluster: multinode-841883
	I0805 12:19:12.846178  420583 notify.go:220] Checking for updates...
	I0805 12:19:12.846516  420583 config.go:182] Loaded profile config "multinode-841883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:19:12.846534  420583 status.go:255] checking status of multinode-841883 ...
	I0805 12:19:12.846947  420583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:19:12.847024  420583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:19:12.865301  420583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0805 12:19:12.865730  420583 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:19:12.866254  420583 main.go:141] libmachine: Using API Version  1
	I0805 12:19:12.866279  420583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:19:12.866758  420583 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:19:12.866984  420583 main.go:141] libmachine: (multinode-841883) Calling .GetState
	I0805 12:19:12.868505  420583 status.go:330] multinode-841883 host status = "Running" (err=<nil>)
	I0805 12:19:12.868527  420583 host.go:66] Checking if "multinode-841883" exists ...
	I0805 12:19:12.868804  420583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:19:12.868842  420583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:19:12.884510  420583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0805 12:19:12.884910  420583 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:19:12.885415  420583 main.go:141] libmachine: Using API Version  1
	I0805 12:19:12.885458  420583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:19:12.885799  420583 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:19:12.885982  420583 main.go:141] libmachine: (multinode-841883) Calling .GetIP
	I0805 12:19:12.888900  420583 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:19:12.889393  420583 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:19:12.889423  420583 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:19:12.889566  420583 host.go:66] Checking if "multinode-841883" exists ...
	I0805 12:19:12.889878  420583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:19:12.889915  420583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:19:12.905105  420583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0805 12:19:12.905480  420583 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:19:12.905892  420583 main.go:141] libmachine: Using API Version  1
	I0805 12:19:12.905915  420583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:19:12.906246  420583 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:19:12.906435  420583 main.go:141] libmachine: (multinode-841883) Calling .DriverName
	I0805 12:19:12.906607  420583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:19:12.906625  420583 main.go:141] libmachine: (multinode-841883) Calling .GetSSHHostname
	I0805 12:19:12.909206  420583 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:19:12.909615  420583 main.go:141] libmachine: (multinode-841883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:b1:cd", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:16:23 +0000 UTC Type:0 Mac:52:54:00:e6:b1:cd Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-841883 Clientid:01:52:54:00:e6:b1:cd}
	I0805 12:19:12.909651  420583 main.go:141] libmachine: (multinode-841883) DBG | domain multinode-841883 has defined IP address 192.168.39.86 and MAC address 52:54:00:e6:b1:cd in network mk-multinode-841883
	I0805 12:19:12.909760  420583 main.go:141] libmachine: (multinode-841883) Calling .GetSSHPort
	I0805 12:19:12.909947  420583 main.go:141] libmachine: (multinode-841883) Calling .GetSSHKeyPath
	I0805 12:19:12.910121  420583 main.go:141] libmachine: (multinode-841883) Calling .GetSSHUsername
	I0805 12:19:12.910241  420583 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883/id_rsa Username:docker}
	I0805 12:19:12.990948  420583 ssh_runner.go:195] Run: systemctl --version
	I0805 12:19:12.997029  420583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:19:13.011939  420583 kubeconfig.go:125] found "multinode-841883" server: "https://192.168.39.86:8443"
	I0805 12:19:13.011965  420583 api_server.go:166] Checking apiserver status ...
	I0805 12:19:13.012000  420583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:19:13.026542  420583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup
	W0805 12:19:13.037751  420583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 12:19:13.037795  420583 ssh_runner.go:195] Run: ls
	I0805 12:19:13.041986  420583 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0805 12:19:13.049671  420583 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I0805 12:19:13.049695  420583 status.go:422] multinode-841883 apiserver status = Running (err=<nil>)
	I0805 12:19:13.049707  420583 status.go:257] multinode-841883 status: &{Name:multinode-841883 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:19:13.049732  420583 status.go:255] checking status of multinode-841883-m02 ...
	I0805 12:19:13.050068  420583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:19:13.050112  420583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:19:13.065845  420583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I0805 12:19:13.066318  420583 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:19:13.066831  420583 main.go:141] libmachine: Using API Version  1
	I0805 12:19:13.066856  420583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:19:13.067226  420583 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:19:13.067435  420583 main.go:141] libmachine: (multinode-841883-m02) Calling .GetState
	I0805 12:19:13.069290  420583 status.go:330] multinode-841883-m02 host status = "Running" (err=<nil>)
	I0805 12:19:13.069310  420583 host.go:66] Checking if "multinode-841883-m02" exists ...
	I0805 12:19:13.069600  420583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:19:13.069631  420583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:19:13.084689  420583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41913
	I0805 12:19:13.085120  420583 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:19:13.085640  420583 main.go:141] libmachine: Using API Version  1
	I0805 12:19:13.085662  420583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:19:13.085926  420583 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:19:13.086089  420583 main.go:141] libmachine: (multinode-841883-m02) Calling .GetIP
	I0805 12:19:13.088918  420583 main.go:141] libmachine: (multinode-841883-m02) DBG | domain multinode-841883-m02 has defined MAC address 52:54:00:5c:95:8e in network mk-multinode-841883
	I0805 12:19:13.089343  420583 main.go:141] libmachine: (multinode-841883-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:95:8e", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:17:34 +0000 UTC Type:0 Mac:52:54:00:5c:95:8e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-841883-m02 Clientid:01:52:54:00:5c:95:8e}
	I0805 12:19:13.089374  420583 main.go:141] libmachine: (multinode-841883-m02) DBG | domain multinode-841883-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:5c:95:8e in network mk-multinode-841883
	I0805 12:19:13.089495  420583 host.go:66] Checking if "multinode-841883-m02" exists ...
	I0805 12:19:13.089833  420583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:19:13.089877  420583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:19:13.105362  420583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0805 12:19:13.105771  420583 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:19:13.106231  420583 main.go:141] libmachine: Using API Version  1
	I0805 12:19:13.106252  420583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:19:13.106536  420583 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:19:13.106695  420583 main.go:141] libmachine: (multinode-841883-m02) Calling .DriverName
	I0805 12:19:13.106843  420583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:19:13.106867  420583 main.go:141] libmachine: (multinode-841883-m02) Calling .GetSSHHostname
	I0805 12:19:13.109501  420583 main.go:141] libmachine: (multinode-841883-m02) DBG | domain multinode-841883-m02 has defined MAC address 52:54:00:5c:95:8e in network mk-multinode-841883
	I0805 12:19:13.109926  420583 main.go:141] libmachine: (multinode-841883-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:95:8e", ip: ""} in network mk-multinode-841883: {Iface:virbr1 ExpiryTime:2024-08-05 13:17:34 +0000 UTC Type:0 Mac:52:54:00:5c:95:8e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-841883-m02 Clientid:01:52:54:00:5c:95:8e}
	I0805 12:19:13.109956  420583 main.go:141] libmachine: (multinode-841883-m02) DBG | domain multinode-841883-m02 has defined IP address 192.168.39.205 and MAC address 52:54:00:5c:95:8e in network mk-multinode-841883
	I0805 12:19:13.110113  420583 main.go:141] libmachine: (multinode-841883-m02) Calling .GetSSHPort
	I0805 12:19:13.110303  420583 main.go:141] libmachine: (multinode-841883-m02) Calling .GetSSHKeyPath
	I0805 12:19:13.110442  420583 main.go:141] libmachine: (multinode-841883-m02) Calling .GetSSHUsername
	I0805 12:19:13.110589  420583 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19377-383955/.minikube/machines/multinode-841883-m02/id_rsa Username:docker}
	I0805 12:19:13.190785  420583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:19:13.205466  420583 status.go:257] multinode-841883-m02 status: &{Name:multinode-841883-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:19:13.205516  420583 status.go:255] checking status of multinode-841883-m03 ...
	I0805 12:19:13.205962  420583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 12:19:13.206018  420583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 12:19:13.222166  420583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0805 12:19:13.222619  420583 main.go:141] libmachine: () Calling .GetVersion
	I0805 12:19:13.223137  420583 main.go:141] libmachine: Using API Version  1
	I0805 12:19:13.223160  420583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 12:19:13.223455  420583 main.go:141] libmachine: () Calling .GetMachineName
	I0805 12:19:13.223646  420583 main.go:141] libmachine: (multinode-841883-m03) Calling .GetState
	I0805 12:19:13.225206  420583 status.go:330] multinode-841883-m03 host status = "Stopped" (err=<nil>)
	I0805 12:19:13.225220  420583 status.go:343] host is not running, skipping remaining checks
	I0805 12:19:13.225226  420583 status.go:257] multinode-841883-m03 status: &{Name:multinode-841883-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
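The StopNode status check logged above follows a simple pattern: probe the apiserver's healthz endpoint over HTTPS on port 8443 for the control-plane node, and fall back to a kubelet systemd check for worker nodes. Below is a minimal Go sketch of the same healthz probe, using the control-plane IP from the log; unlike minikube's own code it skips TLS verification instead of loading the cluster CA, purely to keep the sketch self-contained.

// healthzprobe.go: a minimal sketch of the apiserver probe logged above
// (GET https://<control-plane-ip>:8443/healthz, expecting HTTP 200 and "ok").
// Assumption: TLS verification is skipped rather than loading minikube's CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func apiserverHealthy(ip string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only, no CA bundle
		},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", ip))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("192.168.39.86") // control-plane IP from the log above
	fmt.Println(ok, err)
}

Against the stopped m03 node the same probe would simply fail to connect, which is why the status logic above skips the remaining checks once the host is reported Stopped.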

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-841883 node start m03 -v=7 --alsologtostderr: (39.001188389s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.63s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-841883 node delete m03: (1.636103368s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)
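The go-template passed to kubectl above walks every node in .items, scans .status.conditions, and prints the .status of the condition whose type is "Ready", giving one True/False line per node. A small sketch that runs the same query from Go follows; it assumes kubectl is on PATH with a configured kubeconfig, and passes the template without the extra shell quoting seen in the logged command, since exec.Command does not go through a shell.

// nodesready.go: shells out to kubectl with the same go-template used by the
// test above and prints each node's Ready condition status.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	fmt.Print(string(out))
}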

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (182.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841883 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0805 12:27:52.926793  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 12:30:10.808855  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
E0805 12:30:27.754076  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/addons-624151/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841883 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.835415788s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-841883 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.36s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-841883
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841883-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-841883-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.472519ms)

                                                
                                                
-- stdout --
	* [multinode-841883-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-841883-m02' is duplicated with machine name 'multinode-841883-m02' in profile 'multinode-841883'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-841883-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-841883-m03 --driver=kvm2  --container-runtime=crio: (42.901643957s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-841883
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-841883: exit status 80 (217.569089ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-841883 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-841883-m03 already exists in multinode-841883-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-841883-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.22s)

                                                
                                    
x
+
TestScheduledStopUnix (112.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-082982 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-082982 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.623654748s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082982 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-082982 -n scheduled-stop-082982
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082982 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082982 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-082982 -n scheduled-stop-082982
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-082982
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-082982 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0805 12:37:35.978577  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
E0805 12:37:52.927095  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-082982
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-082982: exit status 7 (63.116957ms)

                                                
                                                
-- stdout --
	scheduled-stop-082982
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-082982 -n scheduled-stop-082982
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-082982 -n scheduled-stop-082982: exit status 7 (70.964534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-082982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-082982
--- PASS: TestScheduledStopUnix (112.22s)
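TestScheduledStopUnix above exercises `stop --schedule`, rescheduling, and `--cancel-scheduled`, then confirms the host eventually reports Stopped (the status command exits non-zero, 7 in this run, once the VM is down). A rough Go sketch of that final polling step is below; the binary path and profile name come from the log above, and the poll interval is illustrative.

// waitstopped.go: poll `minikube status --format={{.Host}}` until the host
// reports Stopped. Non-zero exits are expected once the VM is down, so the
// error from the command is deliberately ignored here.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-082982"
	for i := 0; i < 60; i++ {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		fmt.Printf("host state: %q\n", state)
		if state == "Stopped" {
			return
		}
		time.Sleep(5 * time.Second)
	}
}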

                                                
                                    
x
+
TestRunningBinaryUpgrade (185.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.424457601 start -p running-upgrade-313656 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.424457601 start -p running-upgrade-313656 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.656938137s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-313656 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0805 12:42:52.926303  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-313656 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.207736729s)
helpers_test.go:175: Cleaning up "running-upgrade-313656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-313656
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-313656: (1.14721991s)
--- PASS: TestRunningBinaryUpgrade (185.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833202 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-833202 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.533394ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-833202] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (99.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833202 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833202 --driver=kvm2  --container-runtime=crio: (1m39.585658321s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-833202 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-119870 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-119870 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (108.266693ms)

                                                
                                                
-- stdout --
	* [false-119870] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:38:04.474913  428274 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:38:04.475209  428274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:38:04.475222  428274 out.go:304] Setting ErrFile to fd 2...
	I0805 12:38:04.475229  428274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:38:04.475517  428274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-383955/.minikube/bin
	I0805 12:38:04.476290  428274 out.go:298] Setting JSON to false
	I0805 12:38:04.477606  428274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8431,"bootTime":1722853053,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 12:38:04.477683  428274 start.go:139] virtualization: kvm guest
	I0805 12:38:04.479673  428274 out.go:177] * [false-119870] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 12:38:04.480858  428274 notify.go:220] Checking for updates...
	I0805 12:38:04.480863  428274 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:38:04.482130  428274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:38:04.483466  428274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-383955/kubeconfig
	I0805 12:38:04.484713  428274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-383955/.minikube
	I0805 12:38:04.485983  428274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 12:38:04.487200  428274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:38:04.489202  428274 config.go:182] Loaded profile config "NoKubernetes-833202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:38:04.489350  428274 config.go:182] Loaded profile config "force-systemd-env-882422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:38:04.489470  428274 config.go:182] Loaded profile config "offline-crio-824747": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 12:38:04.489585  428274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:38:04.526840  428274 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 12:38:04.528212  428274 start.go:297] selected driver: kvm2
	I0805 12:38:04.528230  428274 start.go:901] validating driver "kvm2" against <nil>
	I0805 12:38:04.528244  428274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:38:04.530524  428274 out.go:177] 
	W0805 12:38:04.531955  428274 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0805 12:38:04.533175  428274 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-119870 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-119870" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-119870

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-119870"

                                                
                                                
----------------------- debugLogs end: false-119870 [took: 2.938678033s] --------------------------------
helpers_test.go:175: Cleaning up "false-119870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-119870
--- PASS: TestNetworkPlugins/group/false (3.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (65.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833202 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833202 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.32632751s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-833202 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-833202 status -o json: exit status 2 (221.393257ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-833202","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-833202
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-833202: (1.059429008s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (65.61s)
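The JSON printed by `status -o json` above is straightforward to consume programmatically. The sketch below decodes it into a struct containing just the fields visible in that log line (the real output may carry more), and tolerates the non-zero exit that, in this run, accompanies a stopped kubelet.

// statusjson.go: decode the profile status JSON shown above. Binary path and
// profile name are taken from the log; only the logged fields are modeled.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-833202",
		"status", "-o", "json").Output() // non-zero exit expected while kubelet is stopped
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}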

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833202 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833202 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.090646046s)
--- PASS: TestNoKubernetes/serial/Start (28.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-833202 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-833202 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.689046ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
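The VerifyK8sNotRunning check relies on `systemctl is-active --quiet service kubelet` exiting non-zero when the unit is not active (ssh reports status 3 here), so the assertion is simply "the command failed". A sketch of the same signal in Go, with the binary path and profile name taken from the log above:

// kubeletinactive.go: run the kubelet is-active check over `minikube ssh` and
// treat a non-zero exit as "kubelet is not running".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func kubeletActive(profile string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false // non-zero exit: unit is not active
	}
	return err == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive("NoKubernetes-833202"))
}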

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-833202
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-833202: (1.655909857s)
--- PASS: TestNoKubernetes/serial/Stop (1.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (67.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-833202 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-833202 --driver=kvm2  --container-runtime=crio: (1m7.821100285s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (67.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-833202 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-833202 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.781945ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (95.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.298073302 start -p stopped-upgrade-938024 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.298073302 start -p stopped-upgrade-938024 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.15327925s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.298073302 -p stopped-upgrade-938024 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.298073302 -p stopped-upgrade-938024 stop: (2.136508286s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-938024 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-938024 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.364516693s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (95.65s)

                                                
                                    
x
+
TestPause/serial/Start (77.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-335738 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-335738 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m17.88623296s)
--- PASS: TestPause/serial/Start (77.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (126.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m6.168336084s)
--- PASS: TestNetworkPlugins/group/auto/Start (126.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-938024
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (102.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m42.167207093s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (102.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (92.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m32.253277677s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-552vz" [0a9d1f53-b083-4610-b991-7383208d0501] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00515836s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
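The ControllerPod step above waits for a pod carrying the app=kindnet label to become healthy. A rough equivalent, assuming kubectl access to the same context, is to poll the pod phases for that label until one reports Running; this sketch shells out to kubectl rather than using the richer pod-readiness helpers the test framework has.

// waitlabel.go: poll for a Running pod matching app=kindnet in kube-system,
// using the context name from the log above. Interval and retry count are
// illustrative, and Running is a weaker condition than the test's health check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	args := []string{"--context", "kindnet-119870", "-n", "kube-system",
		"get", "pods", "-l", "app=kindnet", "-o", "jsonpath={.items[*].status.phase}"}
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", args...).Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("kindnet controller pod is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=kindnet pod")
}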

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-119870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-119870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5mzlr" [9b3982d3-8407-46a1-9ffe-058a5adede71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5mzlr" [9b3982d3-8407-46a1-9ffe-058a5adede71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004419926s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-119870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-119870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8cs9l" [aed6a619-5b92-458e-a08c-0ccb9ebaef0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8cs9l" [aed6a619-5b92-458e-a08c-0ccb9ebaef0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006107743s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-119870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
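Note: for each plugin, the DNS, Localhost, and HairPin entries run three connectivity probes inside the netcat deployment: resolve kubernetes.default, connect to port 8080 on localhost, and connect back to the pod through its own Service name (hairpin traffic). A small sketch of driving the same kubectl invocations from Go follows; the context name is taken from the log entry above, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// run executes kubectl against the given context and returns combined output.
func run(kubeContext string, args ...string) (string, error) {
	cmd := exec.Command("kubectl", append([]string{"--context", kubeContext}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ctxName := "auto-119870" // profile from the log above

	checks := [][]string{
		// DNS: resolve the kubernetes.default Service name from inside the pod.
		{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		// Localhost: the netcat container listens on 8080 inside the pod.
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		// HairPin: reach the pod back through its own Service name.
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, c := range checks {
		if out, err := run(ctxName, c...); err != nil {
			fmt.Printf("check %v failed: %v\n%s", c, err, out)
		}
	}
}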

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-119870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (90.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.360757378s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (133.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m13.11805727s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (133.12s)
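Note: the CNI-specific Start entries in this group differ only in the network-selection flag passed to minikube start; the flannel and bridge variants appear further down in the report. A table-driven sketch with the profile names and flags copied from those log entries (binary path and shared flags as shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Profile -> network-selection flag, as exercised by the Start tests in this group.
	cniFlags := map[string]string{
		"custom-flannel-119870":     "--cni=testdata/kube-flannel.yaml",
		"enable-default-cni-119870": "--enable-default-cni=true",
		"flannel-119870":            "--cni=flannel",
		"bridge-119870":             "--cni=bridge",
	}
	for profile, flag := range cniFlags {
		args := []string{
			"start", "-p", profile, "--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m", flag,
			"--driver=kvm2", "--container-runtime=crio",
		}
		startAt := time.Now()
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n%s", profile, err, out)
			continue
		}
		fmt.Printf("%s up in %s\n", profile, time.Since(startAt).Round(time.Second))
	}
}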

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dvprw" [3977eef9-445b-4433-b62f-a854619229c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004870596s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-119870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-119870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bnrs4" [48df87bc-b6cb-4043-859f-853dde18357f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bnrs4" [48df87bc-b6cb-4043-859f-853dde18357f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004392202s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-119870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (81.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m21.629181804s)
--- PASS: TestNetworkPlugins/group/flannel/Start (81.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-119870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-119870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vb75b" [82f6fb54-d133-4b1a-acba-47117ac49843] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0805 12:47:52.927090  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/functional-014296/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-vb75b" [82f6fb54-d133-4b1a-acba-47117ac49843] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004817877s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-119870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (108.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-119870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m48.74267519s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-119870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-119870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5lbw5" [c439151b-d590-4ecd-b91a-77c75f830ac1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5lbw5" [c439151b-d590-4ecd-b91a-77c75f830ac1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004317116s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-119870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (116.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-669469 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-669469 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (1m56.757866525s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5jkcl" [b5f41d95-eb48-431c-a991-f8be84b2bd93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005596221s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-119870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-119870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sgcx6" [5c5ffd97-9ff7-4b48-9f1d-4c9342835e98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sgcx6" [5c5ffd97-9ff7-4b48-9f1d-4c9342835e98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.246127042s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-119870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-321139 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-321139 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m2.306687912s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-119870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-119870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hgc8l" [8a978ad4-4d8e-4f19-be36-4b3a139a70be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hgc8l" [8a978ad4-4d8e-4f19-be36-4b3a139a70be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.129770064s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-119870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-119870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-371585 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-371585 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m0.09309885s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-321139 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61652096-d612-4b1d-bac3-a0df9a0e629b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61652096-d612-4b1d-bac3-a0df9a0e629b] Running
E0805 12:50:48.986675  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:48.991967  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:49.002226  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:49.022502  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:49.062875  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:49.143265  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:49.303718  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:49.459246  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:49.464530  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:49.474778  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:49.495047  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:49.535409  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:49.616512  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:49.624760  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:49.776957  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:50.097685  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:50.265668  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:50:50.738396  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
E0805 12:50:51.546350  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003349519s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-321139 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-321139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0805 12:50:52.018841  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-321139 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-669469 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [360674cf-c08b-4bb7-8ece-1813684534d9] Pending
helpers_test.go:344: "busybox" [360674cf-c08b-4bb7-8ece-1813684534d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [360674cf-c08b-4bb7-8ece-1813684534d9] Running
E0805 12:51:09.468998  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
E0805 12:51:09.940726  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/auto-119870/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003911843s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-669469 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)
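Note: the DeployApp steps finish by reading the open-file limit inside the busybox pod with `ulimit -n`. A small sketch of the same query from Go; the context name comes from the log entry above, and treating the command output as a single integer is an assumption for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// Mirrors the final DeployApp step: query the open-file limit inside the busybox pod.
	out, err := exec.Command("kubectl", "--context", "no-preload-669469",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	limit, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("unexpected output:", string(out))
		return
	}
	fmt.Println("open file limit in pod:", limit)
}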

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-669469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-669469 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dcb72cfc-5b23-4826-9383-8f17d23f2f1a] Pending
helpers_test.go:344: "busybox" [dcb72cfc-5b23-4826-9383-8f17d23f2f1a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dcb72cfc-5b23-4826-9383-8f17d23f2f1a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00408066s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-371585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-371585 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (637.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-321139 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0805 12:53:29.231729  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/custom-flannel-119870/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-321139 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m37.363742725s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-321139 -n embed-certs-321139
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (637.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (602.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-669469 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-669469 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (10m2.219799571s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-669469 -n no-preload-669469
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (602.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (589.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-371585 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0805 12:54:17.298279  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:54:27.231330  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:47.711962  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/flannel-119870/client.crt: no such file or directory
E0805 12:54:53.744326  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/calico-119870/client.crt: no such file or directory
E0805 12:54:58.258972  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
E0805 12:55:07.008124  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:07.013395  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:07.023680  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:07.044010  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:07.084339  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:07.164744  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:07.325193  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
E0805 12:55:07.645835  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-371585 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m49.132023596s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-371585 -n default-k8s-diff-port-371585
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (589.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-635707 --alsologtostderr -v=3
E0805 12:55:09.567308  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/bridge-119870/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-635707 --alsologtostderr -v=3: (3.29028908s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-635707 -n old-k8s-version-635707: exit status 7 (60.723987ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-635707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
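Note: the EnableAddonAfterStop steps first confirm the profile is stopped via minikube status, which exits non-zero for a stopped host (the log above records exit status 7 and marks it "may be ok"), and then enable the dashboard addon. A minimal sketch of reading that exit code from Go; the profile name is taken from the log, the rest is assumed for illustration.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-635707", "-n", "old-k8s-version-635707")
	out, err := cmd.CombinedOutput()
	fmt.Printf("host state: %s", out) // prints "Stopped" for a stopped profile, as in the log

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit is expected here because the profile was just stopped.
		fmt.Printf("status exited with code %d (may be ok)\n", exitErr.ExitCode())
	}
}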

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202226 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0805 13:18:36.335793  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/enable-default-cni-119870/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202226 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (49.38970506s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080262174s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-202226 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-202226 --alsologtostderr -v=3: (11.376596842s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202226 -n newest-cni-202226
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202226 -n newest-cni-202226: exit status 7 (68.238856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-202226 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (76.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202226 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202226 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (1m15.769675904s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202226 -n newest-cni-202226
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (76.05s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-202226 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-202226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-202226 --alsologtostderr -v=1: (1.388709535s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202226 -n newest-cni-202226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202226 -n newest-cni-202226: exit status 2 (266.469831ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202226 -n newest-cni-202226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202226 -n newest-cni-202226: exit status 2 (315.990838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-202226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-202226 --alsologtostderr -v=1: (1.045892521s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202226 -n newest-cni-202226
E0805 13:20:48.986996  391219 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-383955/.minikube/profiles/kindnet-119870/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202226 -n newest-cni-202226
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.70s)
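Note: the Pause sequence above checks component state through minikube status Go templates: {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped while the profile is paused, and the accompanying exit status 2 is treated as "may be ok". A short sketch of reading those template outputs while ignoring the expected non-zero exit; the profile name is from the log, the binary path is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentState runs `minikube status` with a Go-template format string and
// returns the printed state, ignoring the non-zero exit that a paused or
// stopped component produces.
func componentState(profile, tmpl string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+tmpl, "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "newest-cni-202226" // from the log above
	fmt.Println("apiserver:", componentState(profile, "{{.APIServer}}")) // "Paused" right after pause
	fmt.Println("kubelet:  ", componentState(profile, "{{.Kubelet}}"))   // "Stopped" while paused
}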

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
150 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
151 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
152 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.87
272 TestNetworkPlugins/group/cilium 3.05
287 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.87s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-119870 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-119870" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-119870

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-119870"

                                                
                                                
----------------------- debugLogs end: kubenet-119870 [took: 2.722909581s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-119870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-119870
--- SKIP: TestNetworkPlugins/group/kubenet (2.87s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.05s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-119870 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-119870" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-119870

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-119870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-119870"

                                                
                                                
----------------------- debugLogs end: cilium-119870 [took: 2.918160442s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-119870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-119870
--- SKIP: TestNetworkPlugins/group/cilium (3.05s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-130994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-130994
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    